This library provides implementations of concurrent FIFO queues (for both general boxed and primitive unboxed values) that are fast, perform well under contention, and offer a Chan-like interface. The library may be of limited usefulness outside of x86 architectures, where the fetch-and-add instruction is not available.

We export several variations of our design; some support additional functionality, while others aim for lower latency by removing features or making them more restrictive (e.g. in the Unboxed variants):

- Unagi: a general-purpose, near drop-in replacement for Chan.
- Unagi.Unboxed: like Unagi but specialized for primitive types; this may perform better if a queue grows very large.
- Unagi.Bounded: a bounded variant with blocking and non-blocking writes, and other functionality where a notion of the queue's capacity is required.
- Unagi.NoBlocking: the lowest-latency implementations, for when blocking reads aren't required.
- Unagi.NoBlocking.Unboxed: like Unagi.NoBlocking but for primitive types.


Keywords
concurrency, library, haskell, queue
License
BSD-3-Clause
Install
cabal install unagi-chan-0.4.1.4

The library is available on Hackage, and you can install it with:

$ cabal install unagi-chan
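
The interface closely mirrors Control.Concurrent.Chan, the main visible difference being that newChan returns the write end (InChan) and the read end (OutChan) separately. Here is a minimal usage sketch of the Unagi variant; the forked-writer setup and message counts are just illustrative choices:

```haskell
import Control.Concurrent (forkIO)
import Control.Monad (forM_, replicateM)
import Control.Concurrent.Chan.Unagi (newChan, readChan, writeChan)

main :: IO ()
main = do
  -- newChan gives us a write end and a read end as a pair
  (inChan, outChan) <- newChan
  -- a couple of concurrent writers
  _ <- forkIO $ forM_ [1 .. 50 :: Int] (writeChan inChan)
  _ <- forkIO $ forM_ [51 .. 100]      (writeChan inChan)
  -- readChan blocks until a message is available
  msgs <- replicateM 100 (readChan outChan)
  print (sum msgs)
```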

Design

The idea is to design a queue around the x86 fetch-and-add instruction, which performs well under contention.

The queue is conceptually simple, consisting of an infinite array and two atomic counters, one for readers and one for writers. A read or write operation consists of incrementing the appropriate counter to obtain an index, and then racing to perform an atomic operation on the array element at that index.

If the writer wins the race, it has written its value for the reader to find, and it exits. If it loses, it performs a rendezvous with the blocked or blocking reader via another mechanism, handing off its value directly.
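
To make the race concrete, here is a minimal, illustrative sketch of the idea in plain Haskell. It is not the library's actual implementation: the counters are plain IORefs rather than a true fetch-and-add primitive, the "infinite" array is faked with a small fixed list of cells, and the Cell type and function names are invented for the example.

```haskell
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (replicateM)
import Data.IORef (IORef, atomicModifyIORef', newIORef)

-- Each array element is raced over by exactly one writer and one reader.
data Cell a
  = Empty              -- nobody has touched this index yet
  | Written a          -- the writer won the race and left its value
  | Blocking (MVar a)  -- the reader won the race and is waiting here

data Sketch a = Sketch
  { cells      :: [IORef (Cell a)]  -- stand-in for the "infinite" array
  , writeCount :: IORef Int
  , readCount  :: IORef Int
  }

newSketch :: Int -> IO (Sketch a)
newSketch capacity =
  Sketch <$> replicateM capacity (newIORef Empty)
         <*> newIORef 0
         <*> newIORef 0

-- Atomically take the next ticket from a counter (models fetch-and-add).
nextIndex :: IORef Int -> IO Int
nextIndex ref = atomicModifyIORef' ref (\n -> (n + 1, n))

writeSketch :: Sketch a -> a -> IO ()
writeSketch q x = do
  i <- nextIndex (writeCount q)
  let cell = cells q !! i  -- assume i < capacity; a real queue grows the array
  raced <- atomicModifyIORef' cell $ \c -> case c of
             Empty -> (Written x, Nothing)  -- writer wins: value is in place
             other -> (other, Just other)   -- reader got here first
  case raced of
    Nothing           -> return ()
    Just (Blocking v) -> putMVar v x        -- rendezvous: hand off directly
    Just _            -> error "impossible: index written twice"

readSketch :: Sketch a -> IO a
readSketch q = do
  i <- nextIndex (readCount q)
  let cell = cells q !! i
  v <- newEmptyMVar
  raced <- atomicModifyIORef' cell $ \c -> case c of
             Empty -> (Blocking v, Nothing) -- reader wins: leave a rendezvous slot
             other -> (other, Just other)   -- writer already left a value
  case raced of
    Nothing          -> takeMVar v          -- block until the writer hands off
    Just (Written x) -> return x
    Just _           -> error "impossible: index read twice"
```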

Linearizability

The queue has FIFO semantics, in the sense of linearizability: our atomic counters ensure that all non-overlapping reads and writes are assigned indices in temporal order.

Lockfree-ness

Operations are non-blocking, with the exception that a stalled writer may block at most one reader (the reader "assigned" to it by our internal counter).

Performance

Here is an example benchmark measuring the time taken to concurrently write and read 100,000 messages, with the work divided amongst an increasing number of readers and writers, comparing against the top-performing queues in the standard libraries. The inset graph shows a zoomed-in view on the implementations here.
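
For local experimentation, a rough harness along the following lines reproduces the shape of that measurement: a fixed number of messages, with the writing and reading split across a few threads each. This is only an illustrative sketch, not the package's benchmark suite; the thread counts, the message count, and the use of the time package for wall-clock timing are choices made for the example.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM, forM_, replicateM_)
import Data.Time.Clock (diffUTCTime, getCurrentTime)  -- from the time package
import Control.Concurrent.Chan.Unagi (newChan, readChan, writeChan)

main :: IO ()
main = do
  let total   = 100000 :: Int
      writers = 2
      readers = 2
  (inChan, outChan) <- newChan
  start <- getCurrentTime
  -- writers: each sends its share of unit messages
  forM_ [1 .. writers] $ \_ ->
    forkIO $ replicateM_ (total `div` writers) (writeChan inChan ())
  -- readers: each consumes its share, then signals completion
  dones <- forM [1 .. readers] $ \_ -> do
    done <- newEmptyMVar
    _ <- forkIO $ do
      replicateM_ (total `div` readers) (readChan outChan)
      putMVar done ()
    return done
  mapM_ takeMVar dones
  end <- getCurrentTime
  print (diffUTCTime end start)
```

Compile with GHC's -threaded flag and run with +RTS -N so the writer and reader threads actually run on multiple cores.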

Benchmarks

Some of these variants may be deprecated in the future if they are found to provide little performance benefit, or no unique features; you should benchmark and experiment with them for your use cases, and please submit pull requests for additions to the benchmark suite that reflect what you find.