azure_sdk_for_rust

Rust wrappers around Microsoft Azure REST APIs


Keywords
iot, sdk, cloud, rest, azure, azure-blob, azure-event-hub, azure-table-storage, blob-storage, cosmosdb, microsoft-azure-sdk, rust
License
Apache-2.0

Microsoft Azure SDK for Rust


The crates published from this repository are:

azure_sdk_auth_aad
azure_sdk_core
azure_sdk_cosmos
azure_sdk_service_bus
azure_sdk_storage_account
azure_sdk_storage_blob
azure_sdk_storage_core
azure_sdk_storage_table

Introduction

Microsoft Azure exposes its services via REST APIs. These APIs are easily consumable from any language (good) but are weakly typed. With this library and its related crates you can exploit the power of Microsoft Azure from Rust in an idiomatic way.

This crate relies heavily on the excellent Hyper crate. As of version 0.30.0, all methods are async/await compliant (futures 0.3).

From version 0.8.0 for Cosmos and 0.9.0 for Storage the repo embraces the builder pattern. As of 0.10.0, most of the storage APIs have been migrated to the builder pattern but some methods are still missing; please check the relevant issues to follow the migration progress. This is still an in-progress transition but the resulting API is much easier to use, and most checks have been moved to compile time. Unfortunately the changes are not backward-compatible. I have blogged about my approach here: https://dev.to/mindflavor/rust-builder-pattern-with-types-3chf.
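
The core idea is to track mandatory parameters in the type system so that forgetting one becomes a compile-time error instead of a runtime failure. Here is a minimal, self-contained sketch of the approach (the builder and method names are illustrative, not the SDK's own):

use std::marker::PhantomData;

struct Yes;
struct No;

// A builder that records, in its type, whether the mandatory
// `container_name` parameter has been supplied.
struct ListBlobsBuilder<ContainerNameSet> {
    container_name: Option<String>,
    _marker: PhantomData<ContainerNameSet>,
}

impl ListBlobsBuilder<No> {
    fn new() -> Self {
        ListBlobsBuilder {
            container_name: None,
            _marker: PhantomData,
        }
    }

    // Supplying the mandatory parameter changes the builder's type.
    fn with_container_name(self, name: &str) -> ListBlobsBuilder<Yes> {
        ListBlobsBuilder {
            container_name: Some(name.to_owned()),
            _marker: PhantomData,
        }
    }
}

impl ListBlobsBuilder<Yes> {
    // `execute` exists only once the container name has been set,
    // so no runtime validation is needed.
    fn execute(self) -> String {
        format!("listing blobs of {}", self.container_name.unwrap())
    }
}

fn main() {
    let result = ListBlobsBuilder::new()
        .with_container_name("my_container")
        .execute();
    println!("{}", result);
    // ListBlobsBuilder::new().execute(); // does not compile: `execute` not found
}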

From version 0.12.0 the library switched from hyper-tls to hyper-rustls, as suggested by bmc-msft in issue #120. This should allow the library to be 100% Rust.

NOTE: This repository is under heavy development and is likely to break over time. The current releases will probably contain bugs. As usual, please open issues if you find any.

Upgrading from pre 0.30.0

From version 0.30.0 the libraries are fully async/await compliant. For the most part your code should work as before: just replace and_then calls with await?. Also make sure to specify the 2018 edition in your Cargo.toml!
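
For example (using a stand-in generic function rather than an actual SDK call), a futures 0.1 combinator chain becomes a plain await:

use std::error::Error;
use std::future::Future;

// Illustrative only: `fetch` stands in for any SDK call.
//
// Before 0.30.0 (futures 0.1 combinators):
//     fetch().and_then(|value| {
//         println!("{}", value);
//         Ok(())
//     })
//
// From 0.30.0 onward (async/await):
async fn print_value<F>(fetch: F) -> Result<(), Box<dyn Error>>
where
    F: Future<Output = Result<String, Box<dyn Error>>>,
{
    let value = fetch.await?;
    println!("{}", value);
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    print_value(async { Ok("hello".to_owned()) }).await
}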

Upgrading from 0.12.0

Starting from version 0.20.0 the monolithic crate has been split into several smaller, more manageable crates. This means you will have to update both your Cargo.toml and your use statements to the new layout. The names should be self-explanatory; the examples have been updated to use the new crate topology. In case of doubt please do not hesitate to open an issue.

As for the functionality, release 0.20.0 is equivalent to 0.12.0, so you can migrate to the new crate topology without picking up extra bugs (hopefully! 😉).

Since 0.20.1 each crate follows its own versioning: we increase the version number of the modified crate only, instead of all of them at once. This way you won't need to update your referenced version as often if you use the more stable crates (such as the storage ones), while the newer ones can proceed at their own pace. Please refer to the table above for the bleeding-edge versions. It also means GitHub releases will detail the most important crate version only (since we cannot have a release for each crate). That's why, for example, you could find the release crate_A_0.30.0 before the release crate_B_0.27.0.
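
In practice the migration touches only your dependencies and imports. For instance, code that used the monolithic crate (old paths illustrative) now pulls each piece from its own crate, exactly as the example below does:

// Cargo.toml now references the individual crates instead of the monolith:
//
//     [dependencies]
//     azure_sdk_core = "..."     // for current versions, see the table above
//     azure_sdk_cosmos = "..."
//
// Old monolithic imports (paths illustrative):
//     use azure_sdk_for_rust::cosmos::prelude::*;
//
// New per-crate imports:
use azure_sdk_core::prelude::*;
use azure_sdk_cosmos::prelude::*;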

Disclaimer

Although I am a Microsoft employee, this is not a Microsoft-endorsed project. It's simply a pet project of mine: I love Rust (who doesn't? 😏) and Microsoft Azure technologies, so I thought I'd close the gap between them. It's also a good project for learning Rust. This library relies heavily on Hyper; we use the latest Hyper code, so the library is fully async with futures and Tokio.

Example

You can find examples in the examples folder of each sub-crate. Here is a glimpse:

main.rs

#[macro_use]
extern crate serde_derive;
// Using the prelude modules of the core and Cosmos crates makes it easier to use
// the Rust Azure SDK for Cosmos DB.
use azure_sdk_core::prelude::*;
use azure_sdk_cosmos::prelude::*;
use futures_util::stream::StreamExt;
use std::borrow::Cow;
use std::error::Error;

// This is the struct we want to use in our sample.
// Make sure to have a collection with partition key "a_number" for this example to
// work (you can create it with this SDK too; check the examples folder for that task).
#[derive(Serialize, Deserialize, Debug)]
struct MySampleStruct<'a> {
    a_string: Cow<'a, str>,
    a_number: u64,
    a_timestamp: i64,
}

// This code will perform these tasks:
// 1. Create 10 documents in the collection.
// 2. Stream all the documents.
// 3. Query the documents.
// 4. Delete the documents returned by task 3.
// 5. Check the remaining documents.
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Let's get Cosmos account and master key from env variables.
    // This helps automated testing.
    let master_key =
        std::env::var("COSMOS_MASTER_KEY").expect("Set env variable COSMOS_MASTER_KEY first!");
    let account = std::env::var("COSMOS_ACCOUNT").expect("Set env variable COSMOS_ACCOUNT first!");

    let database_name = std::env::args()
        .nth(1)
        .expect("please specify the database name as first command line parameter");
    let collection_name = std::env::args()
        .nth(2)
        .expect("please specify the collection name as first command line parameter");

    // First, we create an authorization token. There are two types of tokens: master and
    // resource-constrained. This SDK supports both.
    // Please check the Azure documentation for details or the examples folder
    // on how to create and use token-based permissions.
    let authorization_token = AuthorizationToken::new_master(&master_key)?;

    // Next we will create a Cosmos client.
    let client = ClientBuilder::new(account, authorization_token.clone())?;
    // We know the database so we can obtain a database client.
    let database_client = client.with_database(&database_name);
    // We know the collection so we can obtain a collection client.
    let collection_client = database_client.with_collection(&collection_name);

    // TASK 1 - Insert 10 documents
    println!("Inserting 10 documents...");
    for i in 0..10 {
        // define the document.
        let document_to_insert = Document::new(
            format!("unique_id{}", i), // this is the primary key, AKA "/id".
            MySampleStruct {
                a_string: Cow::Borrowed("Something here"),
                a_number: i * 100, // this is the partition key
                a_timestamp: chrono::Utc::now().timestamp(),
            },
        );

        // insert it!
        collection_client
            .create_document()
            .with_document(&document_to_insert)
            .with_partition_keys(PartitionKeys::new().push(&document_to_insert.document.a_number)?)
            .with_is_upsert(true) // this option will overwrite a preexisting document (if any)
            .execute()
            .await?;
    }
    // wow, that was easy and fast, wasn't it? :)
    println!("Done!");

    // TASK 2
    println!("\nStreaming documents");
    // we limit the number of documents to 3 for each batch as a demonstration. In practice
    // you will use a more sensible number (or accept the Azure default).
    let stream = collection_client.list_documents().with_max_item_count(3);
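    // `stream()` returns an anonymous `impl Stream` type, which is not `Unpin`,
    // so we pin it (here on the heap via `Box::pin`) before calling `next()` on it.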
    let mut stream = Box::pin(stream.stream::<MySampleStruct>());
    // TODO: as soon as async streaming stabilizes in Rust we can replace
    // this `while let Some(...)` loop with a `for each` construct
    // (or whatever the Rust team picks).
    while let Some(res) = stream.next().await {
        let res = res?;
        println!("Received {} documents in one batch!", res.documents.len());
        res.documents.iter().for_each(|doc| println!("{:#?}", doc));
    }

    // TASK 3
    println!("\nQuerying documents");
    let query_documents_response = collection_client
        .query_documents()
        .with_query(&("SELECT * FROM A WHERE A.a_number < 600".into()))
        .with_query_cross_partition(true) // this will perform a cross partition query! notice how simple it is!
        .execute::<MySampleStruct>()
        .await?;

    println!(
        "Received {} documents!",
        query_documents_response.results.len()
    );

    query_documents_response
        .results
        .iter()
        .for_each(|document| println!("number ==> {}", document.result.a_number));

    // TASK 4
    for ref document in query_documents_response.results {
        println!(
            "deleting id == {}, a_number == {}.",
            document.document_attributes.id, document.result.a_number
        );

        // to spice up the delete a little we use optimistic concurrency
        collection_client
            .with_document(&document.document_attributes.id)
            .delete_document()
            .with_partition_keys(PartitionKeys::new().push(&document.result.a_number)?)
            .with_if_match_condition((&document.document_attributes).into())
            .execute()
            .await?;
    }

    // TASK 5
    // The query above matched (and we deleted) 6 of the 10 documents,
    // so the listing should now return 4 documents.
    let list_documents_response = collection_client
        .list_documents()
        .execute::<serde_json::Value>() // you can use this if you don't know/care about the return type!
        .await?;
    assert_eq!(list_documents_response.documents.len(), 4);

    Ok(())
}

State of the art

Right now the key framework is in place (authentication, enumerations, parsing and so on). If you want to contribute, please do! Methods are added daily, so please check the release page for updates on the progress. Also note that the project is in its early stages, so the APIs are bound to change at any moment. I will strive to keep things steady but, since I'm new to Rust, I'm sure I'll have to correct some serious mistake before too long 😄. I generally build against the latest nightly and leave it to Travis to check backward compatibility.

Contributing

If you want to contribute, please do! No formality required! 😉 Please note that by opening a pull request you agree to license your code under the Apache license, version 2.0.

Run E2E test

Linux

export STORAGE_ACCOUNT=<account>
export STORAGE_MASTER_KEY=<key>

export AZURE_SERVICE_BUS_NAMESPACE=<azure_service_bus_namespace>
export AZURE_EVENT_HUB_NAME=<azure_event_hub_name>
export AZURE_POLICY_NAME=<azure_policy_name>
export AZURE_POLICY_KEY=<azure policy key>

export COSMOS_ACCOUNT=<cosmos_account>
export COSMOS_MASTER_KEY=<cosmos_master_key>

cd azure_sdk_service_bus
cargo test --features=test_e2e

cd ../azure_sdk_storage_blob
cargo test --features=test_e2e

cd ../azure_sdk_storage_account
cargo test --features=test_e2e

cd ../azure_sdk_cosmos
cargo test --features=test_e2e

Windows

set STORAGE_ACCOUNT=<account>
set STORAGE_MASTER_KEY=<key>

set AZURE_SERVICE_BUS_NAMESPACE=<azure_service_bus_namespace>
set AZURE_EVENT_HUB_NAME=<azure_event_hub_name>
set AZURE_POLICY_NAME=<azure_policy_name>
set AZURE_POLICY_KEY=<azure policy key>

set COSMOS_ACCOUNT=<cosmos_account>
set COSMOS_MASTER_KEY=<cosmos_master_key>

cd azure_sdk_service_bus
cargo test --features=test_e2e

cd ../azure_sdk_storage_blob
cargo test --features=test_e2e

cd ../azure_sdk_storage_account
cargo test --features=test_e2e

cd ../azure_sdk_cosmos
cargo test --features=test_e2e

License

This project is published under Apache license, version 2.0.