Equinox.Templates

Equinox and Propulsion templates: dotnet new eqxweb, eqxwebcs, eqxtestbed, proConsumer, proProjector, proReactor, proSync


Keywords
equinox, fsharp, eventsourcing, cosmosdb, dynamodb, eventstore, changefeedprocessor, kafka, benchmark, changefeed, dotnet-templates, event-sourcing, propulsion
License
Apache-2.0
Install
Install-Package Equinox.Templates -Version 6.3.1

Documentation


This repo hosts the source for Jet's dotnet new templates.

Equinox only

These templates focus solely on Consistent Processing using Equinox Stores:

  • eqxweb - Boilerplate for an ASP .NET Core 3 Web App, with an associated storage-independent Domain project using Equinox.
  • eqxwebcs - Boilerplate for an ASP .NET Core 3 Web App, with an associated storage-independent Domain project using Equinox, ported to C#.
  • eqxtestbed - Host that allows running back-to-back benchmarks when prototyping models using Equinox (https://github.com/jet/equinox), using different stores and/or store configuration parameters.
  • eqxPatterns - Equinox Skeleton Deciders and Tests implementing various event sourcing patterns:
    • Managing a chain of Periods with a Rolling Balance carried forward (aka Closing the Books)
    • Feeding items into a List managed as a Series of Epochs with exactly once ingestion logic

Propulsion related

The following templates focus specifically on the usage of Propulsion components:

  • proProjector - Boilerplate for a Publisher application that

    • consumes events from one of:

      1. (default) --source cosmos: an Azure CosmosDb ChangeFeedProcessor (typically unrolling events from Equinox.CosmosStore stores using Propulsion.CosmosStore)

        • -k --parallelOnly: schedule Kafka emission to operate in parallel at document level (rather than per accumulated span of events for a stream)
      2. --source eventStore: Track an EventStoreDB >= 21.10 instance's $all feed using the gRPC interface (via Propulsion.EventStoreDb)

      3. --source sqlStreamStore: SqlStreamStore's $all feed

      4. --source dynamo: a DynamoDB-based source (via Propulsion.DynamoStore)

    • -k adds publishing to Apache Kafka using Propulsion.Kafka.

  • proConsumer - Boilerplate for an Apache Kafka Consumer using Propulsion.Kafka (typically consuming from an app produced with dotnet new proProjector -k).

  • periodicIngester - Boilerplate for a service that regularly walks the content of a source, feeding it into a Propulsion projector in order to manage the ingestion process using Propulsion.Feed.PeriodicSource

  • proDynamoStoreCdk

    • AWS CDK Wiring for programmatic IaC deployment of Propulsion.DynamoStore.Indexer and Propulsion.DynamoStore.Notifier

Producer/Reactor Templates combining usage of Equinox and Propulsion

The bulk of the remaining templates have a consumer aspect, and hence involve usage of Propulsion. The specific behaviors carried out in reaction to incoming events often use Equinox components.

  • proReactor - Boilerplate for an application that handles reactive actions, ranging from publishing notifications via Kafka (simple, or summarising events) through to driving follow-on actions implied by events (e.g., updating a denormalized view of an aggregate)

    Input options are:

    1. (default) Propulsion.Cosmos/Propulsion.DynamoStore/Propulsion.EventStoreDb/Propulsion.SqlStreamStore depending on whether the program is run with cosmos, dynamo, es, sss arguments
    2. --source kafkaEventSpans: changes source to be Kafka Event Spans, as emitted from dotnet new proProjector --kafka

    The reactive behavior template has the following options:

    1. Default processing shows importing (in summary form) from an aggregate in EventStore or a CosmosDB ChangeFeedProcessor to a Summary form in Cosmos
    2. --blank: remove sample Ingester logic, yielding a minimal projector
    3. --kafka (without --blank): adds optional projection to Apache Kafka using Propulsion.Kafka (instead of ingesting into a local Cosmos store), producing a versioned Summary Event feed
    4. --kafka --blank: provides wiring for producing to Kafka, without summary reading logic etc

    NOTE At present, checkpoint storage when projecting from EventStore uses Azure CosmosDB - help wanted ;)

  • feedSource - Boilerplate for an ASP.NET Core Web Api serving a feed of items stashed in an Equinox.CosmosStore. See dotnet new feedConsumer for the associated consumption logic

  • feedConsumer - Boilerplate for a service consuming a feed of items served by dotnet new feedSource using Propulsion.Feed

  • summaryConsumer - Boilerplate for an Apache Kafka Consumer using Propulsion.Kafka to ingest versioned summaries produced by a dotnet new proReactor --kafka.

  • trackingConsumer - Boilerplate for an Apache Kafka Consumer using Propulsion.Kafka to ingest accumulating changes in an Equinox.Cosmos store idempotently.

  • proSync - Boilerplate for a console app that syncs events between Equinox.Cosmos and Equinox.EventStore stores using the relevant Propulsion.* libraries, filtering/enriching/mapping Events as necessary.

  • proArchiver - Boilerplate for a console app that syncs Events from relevant Categories from a Hot container to an associated warm Equinox.Cosmos store's archival container using the relevant Propulsion.* libraries.

    • An Archiver is intended to run continually as an integral part of a production system.
  • proPruner - Boilerplate for a console app that inspects Events from relevant Categories in an Equinox.Cosmos store's Hot container and uses that to drive the removal of (archived) Events that have Expired from the associated Hot Container using the relevant Propulsion.* libraries.

    • While a Pruner does not consume a large amount of RU capacity from either the Hot or Warm Containers, running one continually is definitely optional; a Pruner only has a purpose when there are Expired events in the Hot Container; running periodically during low-load periods may be appropriate, depending on the lifetime profile of the events in your system

    • Reducing the traversal frequency needs to be balanced against the primary goal of deleting from the Hot Container: preventing it splitting into multiple physical Ranges.

    • It is necessary to reset the CFP checkpoint (delete the checkpoint documents, or use a new Consumer Group Name) to trigger a re-traversal if events have expired since the last time a traversal took place.

  • proCosmosReactor - Stripped down derivative of proReactor template. 🙏 @ragiano215

    • Specific to CosmosDb

    • For applications where the reactions use the same Container, credentials etc. as the one being Monitored by the change feed processor (simpler config wiring and less argument processing)

    • includes full wiring for Prometheus metrics emission from the Handler outcomes

  • eqxShipping - Example demonstrating the implementation of a Process Manager using Equinox that manages the enlistment of a set of Shipment Aggregate items into a separated Container Aggregate as an atomic operation. 🙏 @Kimserey.

    • processing is fully idempotent; retries, concurrent or overlapping transactions are intended to be handled thoroughly and correctly
    • if any Shipments cannot be Reserved, those that have been get Revoked, and the failure is reported to the caller
    • includes a Watchdog console app (based on dotnet new proReactor --blank) responsible for concluding abandoned transaction instances (e.g., where processing is carried out in response to a HTTP request and the Client fails to retry after a transient failure leaves processing in a non-terminal state).
    • Does not include wiring for Prometheus metrics (see proHotel)

  • proHotel - Example demonstrating the implementation of a Process Manager using Equinox that coordinates the merging of a set of GuestStays in a Hotel as a single GroupCheckout activity that covers the payment for each of the stays selected.

    • illustrates correct idempotent logic such that concurrent group checkouts that are competing to cover the same stay work correctly, even when commands are retried.
    • Reactor program is wired to support consuming from MessageDb or DynamoDb.
    • Unit tests validate correct processing of reactions without the use of projection support mechanisms from the Propulsion library.
    • Integration tests establish a Reactor as an xUnit.net Collection Fixture (for MessageDb or DynamoDb) or as Class Fixtures (for MemoryStore) to enable running scenarios that rely on processing managed by the Reactor program, without having to run it concurrently.
    • Includes wiring for Prometheus metrics.

Walkthrough

As dictated by the design of dotnet's templating mechanism, consumption is ultimately via the .NET Core SDK's dotnet new CLI facility and/or associated facilities in Visual Studio, Rider etc.

To use from the command line, the outline is:

  1. Install a template locally (use dotnet new --list to view your current list)
  2. Use dotnet new to expand the template in a given directory
# install the templates into `dotnet new`s list of available templates so it can be picked up by
# `dotnet new`, Rider, Visual Studio etc.
dotnet new -i Equinox.Templates

# --help shows the options including wiring for storage subsystems,
# -t includes an example Domain, Handler, Service and Controller to test from app to storage subsystem
dotnet new eqxweb -t --help

# if you want to see a C# equivalent:
dotnet new eqxwebcs -t

# see readme.md in the generated code for further instructions regarding the TodoBackend that the -t switch above triggers the inclusion of
start readme.md

# ... to add an Ingester that reacts to events, as they are written (via EventStore $all or CosmosDB ChangeFeedProcessor) summarising them and feeding them into a secondary stream
# (equivalent to pairing the Projector and Ingester programs we make below)
md -p ../DirectIngester | Set-Location
dotnet new proReactor

# ... to add a Projector
md -p ../Projector | Set-Location
# (-k emits to Kafka and hence implies having a Consumer)
dotnet new proProjector -k
start README.md

# ... to add a Generic Consumer (proProjector -k emits to Kafka and hence implies having a Consumer)
md -p ../Consumer | Set-Location
dotnet new proConsumer
start README.md

# ... to add an Ingester based on the events that Projector sends to kafka
# (equivalent in function to DirectIngester, above)
md -p ../Ingester | Set-Location
dotnet new proReactor --source kafkaEventSpans

# ... to add a Summary Projector
md -p ../SummaryProducer | Set-Location
dotnet new proReactor --kafka 
start README.md

# ... to add a Custom Projector
md -p ../CustomProjector | Set-Location
dotnet new proReactor --kafka --blank
start README.md

# ... to add a Summary Consumer (ingesting output from `SummaryProducer`)
md -p ../SummaryConsumer | Set-Location
dotnet new summaryConsumer
start README.md

# ... to add a Testbed
md -p ../My.Tools.Testbed | Set-Location
# -e -c: add EventStore and CosmosDb support to go with the default support for MemoryStore
dotnet new eqxtestbed -c -e
start README.md
# run for 1 min with 10000 rps against an in-memory store
dotnet run -p Testbed -- run -d 1 -f 10000 memory
# run for 30 mins with 2000 rps against a local EventStore
dotnet run -p Testbed -- run -f 2000 es
# run for two minutes against CosmosDb (see https://github.com/jet/equinox#quickstart for provisioning instructions)
dotnet run -p Testbed -- run -d 2 cosmos

# ... to add a Sync tool
md -p ../My.Tools.Sync | Set-Location
# (-m includes an example of how to upconvert from similar event-sourced representations in an existing store)
dotnet new proSync -m
start README.md

# ... to add a Shipping Domain example containing a Process Manager with a Watchdog Service
md -p ../Shipping | Set-Location
dotnet new eqxShipping

# ... to add a Reactor against a Cosmos container for both listening and writing
md -p ../CosmosReactor | Set-Location
dotnet new proCosmosReactor

# ... to add a Hotel Sample for use with MessageDb or DynamoDb
md -p ../ProHotel | Set-Location
dotnet new proHotel

TESTING

There are integration tests in the repo that check everything compiles before we merge/release:

dotnet build build.proj # build the Equinox.Templates package and run the tests
dotnet pack build.proj # build the Equinox.Templates package only
dotnet test build.proj -c Release # test the alphabetically newest file in bin/nupkgs only (-c Release to run the full tests)

One can also do it manually:

  1. Generate the package (per set of changes you make locally)

    a. ensuring the template's base code compiles (see runnable templates concept in dotnet new docs)

    b. packaging into a local nupkg

     $ cd ~/dotnet-templates
     $ dotnet pack build.proj
     Successfully created package '/Users/me/dotnet-templates/bin/nupkg/Equinox.Templates.3.10.1-alpha.0.1.nupkg'.
    
  2. Test, per variant

    (Best to do this in another command prompt in a scratch area)

    a. installing the templates into the dotnet new local repo

     $ dotnet new -i /Users/me/dotnet-templates/bin/nupkg/Equinox.Templates.3.10.1-alpha.0.1.nupkg
    

    b. get to an empty scratch area

     $ mkdir -p ~/scratch/templs/t1
     $ cd ~/scratch/templs/t1
    

    c. test a variant (i.e. per symbol in the config)

     $ dotnet new proReactor -k # an example - in general you only need to test stuff you're actually changing
     $ dotnet build # test it compiles
     $ # REPEAT N TIMES FOR COMBINATIONS OF SYMBOLS
    
  3. uninstalling the locally built templates from step 2a:

    $ dotnet new -u Equinox.Templates

PATTERNS / GUIDANCE

Use Strongly typed ids

Wherever possible, the samples strongly type identifiers, particularly ones that might naturally be represented as primitives such as string.

  • FSharp.UMX is useful to transparently pin types in a message contract cheaply - it works well for a number of contexts:

    • Coding/decoding events using FsCodec (because Events are things that have happened, validating them is not a central concern as we load and fold these incontrovertible Facts)
    • Model binding in ASP.NET (because the types de-sugar to the primitives, no special support is required). Unlike events, there are more considerations in play in this context though; often you'll want to apply validation to the inputs (representing Commands) as you map them to Value Objects, Making Illegal States Unrepresentable. Often, Single Case Discriminated Unions can be a better tool in that context.
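
The UMX approach can be sketched as follows (a minimal sketch; the `skuId` measure and the `SkuId` helpers are illustrative, not lifted from any specific template):

```fsharp
// FSharp.UMX pins a unit-of-measure onto a primitive, yielding a distinct
// compile-time type that erases to the underlying string at runtime
open FSharp.UMX

[<Measure>] type skuId
type SkuId = string<skuId>

module SkuId =
    let parse (raw : string) : SkuId = UMX.tag raw
    let toString (id : SkuId) : string = UMX.untag id

// A function taking a SkuId will not accept a raw string (or some other id
// type), so ids cannot be mixed up, yet no wrapper allocation is involved
```

Because the type erases to `string`, FsCodec encoding/decoding and ASP.NET model binding work without any custom converters.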

Managing Projections and Reactions with Equinox, Propulsion and FsKafka

Aggregate module conventions

There are established conventions documented in Equinox's module Aggregate overview

Microservice Program.fs conventions

All the templates herein attempt to adhere to a consistent structure for the composition root module (the one containing an Application’s main), consisting of the following common elements:

type Configuration

Responsible for: Loading secrets and custom configuration, supplying defaults when environment variables are not set

Wiring up retrieval of configuration values is the most environment-dependent aspect of wiring up an application's interaction with its environment and/or data storage mechanisms. This is particularly relevant where there is variance between local (development time), testing and production deployments. For this reason, the retrieval of values from configuration stores or key vaults is not managed directly within the module Args section.

The Configuration type is responsible for encapsulating all bindings to Configuration or Secret stores (Vaults) in order that this does not have to be complected with the argument parsing or defaulting in module Args

  • DO (sparingly) rely on inputs from the command line to drive the lookup process
  • DONT log values (module Args’s Arguments wrappers should do that as applicable as part of the wireup process)
  • DONT perform redundant work to load values if they’ve already been supplied via Environment Variables
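
As a minimal sketch (the environment variable name and the member are hypothetical, not drawn from a specific template), a Configuration type might look like:

```fsharp
// Encapsulates all bindings to Configuration/Secret stores, so that
// module Args never needs to know where values actually come from
type Configuration(tryGet : string -> string option) =

    let get key =
        match tryGet key with
        | Some value -> value
        | None -> failwith $"Missing environment variable {key}"

    // Illustrative member: a connection string sourced from the environment
    member _.CosmosConnection = get "EQUINOX_COSMOS_CONNECTION"
```

Passing `tryGet` in as a function keeps the type testable and leaves the decision of whether a value comes from an environment variable, a vault, or a test stub with the caller.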

module Args

Responsible for: mapping Environment Variables and the Command Line argv to an Arguments model

module Args fulfils three roles:

  1. uses Argu to map the inputs passed via argv to values per argument, providing good error and/or help messages in the case of invalid inputs
  2. responsible for managing all defaulting of input values including echoing them to console such that an operator can infer the arguments in force without having to go look up defaults in a source control repo
  3. exposes an object model that the build or start functions can use to succinctly wire up the dependencies without needing to touch Argu, Configuration, or any concrete Configuration or Secrets storage mechanisms
  • DO take values via Argu or Environment Variables
  • DO log the values being applied, especially where defaulting is in play
  • DONT log secrets
  • DONT mix in any application or settings specific logic (no retrieval of values, don’t make people read the boilerplate to see if this app has custom secrets retrieval)
  • DONT invest time changing the layout; leaving it consistent makes it easier for others to scan
  • DONT be tempted to merge blocks of variables into a coupled monster - the intention is to (to the maximum extent possible) group arguments into clusters of 5-7 related items
  • DONT reorder types - it'll just make it harder if you ever want to remix and/or compare and contrast across a set of programs

NOTE: there's a medium term plan to submit a PR to Argu extending it to be able to fall back to environment variables where a value is not supplied, by means of declarative attributes on the Argument specification in the DU, including having the --help message automatically include a reference to the name of the environment variable that one can supply the value through
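
The three roles can be sketched with Argu along the following lines (the parameter names are illustrative rather than drawn from any one template):

```fsharp
open Argu

// Role 1: Argu maps argv to typed values, with generated help/error messages
type Parameters =
    | [<AltCommandLine "-V">] Verbose
    | [<AltCommandLine "-g">] Group of string
    interface IArgParserTemplate with
        member p.Usage =
            match p with
            | Verbose -> "request verbose logging."
            | Group _ -> "specify the consumer group name. Default: 'default'."

// Roles 2 and 3: Arguments applies defaulting, and is the only object model
// the start/build functions need to touch
type Arguments(p : ParseResults<Parameters>) =
    member _.Verbose = p.Contains Verbose
    member _.Group = p.GetResult(Group, "default")
```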

type Logging

Responsible for applying logging config and setting up loggers for the application

  • DO allow overriding of log level via a command line argument and/or environment variable (by passing Args.Arguments or values from it)

example

type Logging() =

    [<Extension>]
    static member Configure(configuration : LoggingConfiguration, ?verbose) =
        configuration
            .Enrich.FromLogContext()
        |> fun c -> if verbose = Some true then c.MinimumLevel.Debug() else c
        // etc.

start function

The start function contains the specific wireup relevant to the infrastructure requirements of the microservice - it's the sole aspect that is not expected to adhere to a standard layout as prescribed in this section.

example

let start (args : Args.Arguments) =
    …
    (yields a started application loop)

run, main functions

The run function formalizes the overall pattern. It is responsible for:

  1. Managing the correct sequencing of the startup procedure, weaving together the above elements
  2. managing the emission of startup or abnormal termination messages to the console
  • DONT alter the canonical form - the processing is in this exact order for a multitude of reasons
  • DONT have any application specific wiring within run - any such logic should live within the start and/or build functions
  • DONT return an int from run; let main define the exit codes in one place

example

let run args = async {
    use consumer = start args
    return! consumer.AwaitWithStopOnCancellation()
}

[<EntryPoint>]
let main argv =
    try let args = Args.parse EnvVar.tryGet argv
        try Log.Logger <- LoggerConfiguration().Configure(verbose=args.Verbose).CreateLogger()
            try run args |> Async.RunSynchronously; 0
            with e when not (e :? MissingArg) -> Log.Fatal(e, "Exiting"); 2
        finally Log.CloseAndFlush()
    with MissingArg msg -> eprintfn "%s" msg; 1
        | :? Argu.ArguParseException as e -> eprintfn "%s" e.Message; 1
        | e -> eprintfn "Exception %s" e.Message; 1

CONTRIBUTING

Please don't hesitate to create a GitHub issue for any questions, so others can benefit from the discussion. For any significant planned changes or additions, please err on the side of reaching out early so we can align expectations - there's nothing more frustrating than having your hard work not yielding a mutually agreeable result ;)

See the Equinox repo's CONTRIBUTING section for general guidelines on how contributions are considered, specifically with regard to Equinox.

The following sorts of things are top of the list for the templates:

  • Fixes for typos, adding of info to the readme or comments in the emitted code etc
  • Small-scale cleanup or clarifications of the emitted code
  • support for additional languages in the templates
  • further straightforward starter projects

While there is no rigid or defined limit to what makes sense to add, it should be borne in mind that dotnet new eqx/pro* is sometimes going to be a new user's first interaction with Equinox and/or [asp]dotnetcore. Hence there's a delicate (and intrinsically subjective) balance to be struck between:

  1. simplicity of programming techniques used / beginner friendliness
  2. brevity of the generated code
  3. encouraging good design practices

In other words, there's lots of subtlety to what should and shouldn't go into a template - so discussing changes before investing time is encouraged; agreed changes will generally be rolled out across the repo.