World-class Rust SDK for the Model Context Protocol (MCP) with industry-leading transport implementation and MCP 2025-06-18 specification compliance.
TurboMCP is the premium standard for MCP implementation, delivering enterprise-scale production capabilities with cutting-edge performance:
- 🏆 MCP 2025-06-18 Compliant - 100% specification compliance with next-generation features
- 🚀 Ultra-High Performance - 334,961 msg/sec throughput with SIMD-accelerated JSON
- 🛡️ Enterprise Security - OAuth 2.1 MCP compliance, CORS, rate limiting, security headers, TLS 1.3
- ⚡ Zero-Overhead Macros - Ergonomic `#[server]`, `#[tool]`, and `#[resource]` attributes
- 🔗 World-Class Transports - 5 production-ready protocols with bidirectional support
- 🎯 Type Safety - Compile-time validation with automatic schema generation
- 📁 Roots Support - MCP-compliant filesystem boundaries with OS-aware defaults
- 🔄 Production Ready - Circuit breakers, graceful shutdown, session management
- 🎭 Advanced Elicitation - Server-initiated interactive forms with validation
- 🤖 Sampling Protocol - Bidirectional LLM sampling with streaming support
- 🎵 AudioContent Support - Industry-exclusive multimedia content handling
- 📝 Enhanced Annotations - Rich metadata with ISO 8601 timestamps
Add TurboMCP to your `Cargo.toml`:

```toml
[dependencies]
turbomcp = "1.0.8"
tokio = { version = "1.0", features = ["full"] }
serde_json = "1.0"
```
Create a simple calculator server:
```rust
use turbomcp::prelude::*;

#[derive(Clone)]
struct Calculator;

#[server(
    name = "calculator-server",
    version = "1.0.0",
    description = "A simple calculator"
)]
impl Calculator {
    #[tool("Add two numbers")]
    async fn add(&self, a: i32, b: i32) -> McpResult<i32> {
        Ok(a + b)
    }

    #[tool("Get server status")]
    async fn status(&self, ctx: Context) -> McpResult<String> {
        ctx.info("Status requested").await?;
        Ok("Server running".to_string())
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    Calculator.run_stdio().await?;
    Ok(())
}
```
Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "my-turbomcp-server": {
      "command": "/path/to/your/server/binary",
      "args": []
    }
  }
}
```
```bash
# Test with CLI tool
cargo install turbomcp-cli

# For HTTP/WebSocket servers
turbomcp-cli tools-list --url http://localhost:8080/mcp

# For STDIO servers (like Claude Desktop)
turbomcp-cli tools-list --command "./your-server"

# Test directly
echo '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}' | ./your-server
```
TurboMCP is the first and only library to achieve complete MCP 2025-06-18 specification compliance, delivering industry-exclusive capabilities:
Handle audio data with rich metadata - no other MCP library supports this:
```rust
use turbomcp::prelude::*;

#[derive(Clone)]
struct AudioServer;

#[server]
impl AudioServer {
    #[tool("Process audio data")]
    async fn process_audio(&self, audio_data: String) -> McpResult<Content> {
        Ok(Content::Audio(AudioContent {
            data: audio_data, // Base64-encoded audio
            mime_type: "audio/wav".to_string(),
            annotations: Some(Annotations {
                audience: Some(vec!["user".to_string()]),
                priority: Some(0.8),
                last_modified: Some("2025-06-18T12:00:00Z".to_string()),
            }),
            meta: Some(enhanced_metadata), // metadata value constructed elsewhere
        }))
    }
}
```
Rich metadata with ISO 8601 timestamps and audience targeting:
```rust
let annotations = Annotations {
    audience: Some(vec!["user".to_string(), "assistant".to_string()]),
    priority: Some(0.9),                    // Priority weighting
    last_modified: Some(iso8601_timestamp), // ISO 8601 timestamp string, built elsewhere
};
```
Proper separation of programmatic names and human-readable titles:
```rust
#[server(
    name = "mcp-2025-server",          // Programmatic identifier
    title = "MCP 2025 Feature Server", // Human-readable title
    version = "1.0.8"
)]
impl ModernServer { /* ... */ }
```
Interactive server-initiated user input with advanced schema validation:
```rust
use turbomcp::elicitation::*;

let schema = ElicitationSchema::new()
    .add_property("user_name".to_string(),
        string("Your Name").min_length(2).max_length(50))
    .add_property("email".to_string(),
        string("Email Address").format("email"))
    .require(vec!["user_name".to_string(), "email".to_string()]);

let response = ctx.elicit("Please provide your details", schema).await?;
```
Production-ready security features with environment-aware configurations:
```rust
use turbomcp_transport::{AxumMcpExt, McpServerConfig};

// Production security configuration
let config = McpServerConfig::production()
    .with_cors_origins(vec!["https://app.example.com".to_string()])
    .with_custom_csp("default-src 'self'; connect-src 'self' wss:")
    .with_rate_limit(120, 20) // 120 requests/minute, 20 burst
    .with_jwt_auth("your-secret-key".to_string());

let app = Router::new()
    .route("/api/status", get(status_handler))
    .merge(Router::<()>::turbo_mcp_routes_for_merge(mcp_service, config));
```
Security Features:
- 🔒 CORS Protection - Environment-aware cross-origin policies
- 📋 Security Headers - CSP, HSTS, X-Frame-Options, and more
- ⚡ Rate Limiting - Token bucket algorithm with flexible strategies
- 🔑 Multi-Auth - JWT validation and API key authentication
- 🔐 TLS Support - Automatic certificate loading with TLS 1.3
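The rate limiter above is a token bucket: requests draw tokens from a bucket that refills at a steady rate, so short bursts pass while sustained load is throttled. The following is a plain-Rust sketch of the algorithm only, not TurboMCP's implementation; the type and method names are illustrative.

```rust
// Token-bucket sketch: capacity = burst size, refill = steady rate.
struct TokenBucket {
    capacity: f64,       // burst size (e.g. 20)
    tokens: f64,         // tokens currently available
    refill_per_sec: f64, // steady rate (e.g. 120/min = 2.0/sec)
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec }
    }

    /// Advance time by `elapsed_secs`, then try to take one token.
    /// Returns true if the request is allowed.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // 120 requests/minute with a burst of 20, as in the config above.
    let mut bucket = TokenBucket::new(20.0, 120.0 / 60.0);
    let allowed = (0..25).filter(|_| bucket.try_acquire(0.0)).count();
    println!("allowed in burst: {}", allowed); // first 20 pass, the rest are limited
}
```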
Built-in OAuth 2.0 support with Google, GitHub, Microsoft providers:
```rust
use turbomcp::prelude::*;
use turbomcp::auth::*;

#[derive(Clone)]
pub struct AuthenticatedServer {
    oauth_providers: Arc<RwLock<HashMap<String, OAuth2Provider>>>,
}

#[server]
impl AuthenticatedServer {
    #[tool("Get authenticated user profile")]
    async fn get_user_profile(&self, ctx: Context) -> McpResult<String> {
        if let Some(user_id) = ctx.user_id() {
            Ok(format!("Authenticated user: {}", user_id))
        } else {
            Err(mcp_error!("Authentication required"))
        }
    }

    #[tool("Start OAuth flow")]
    async fn start_oauth_flow(&self, provider: String) -> McpResult<String> {
        let providers = self.oauth_providers.read().await;
        if let Some(oauth_provider) = providers.get(&provider) {
            let auth_result = oauth_provider.start_authorization().await?;
            Ok(format!("Visit: {}", auth_result.auth_url))
        } else {
            Err(mcp_error!("Unknown provider: {}", provider))
        }
    }
}
```
OAuth Features:
- 🔐 Multiple Providers - Google, GitHub, Microsoft, custom OAuth 2.0
- 🛡️ Always-On PKCE - Security enabled by default
- 🔄 All OAuth Flows - Authorization Code, Client Credentials, Device Code
- 👥 Session Management - User session tracking with cleanup
Robust dependency injection with request correlation:
```rust
#[server]
impl ProductionServer {
    #[tool("Process with full observability")]
    async fn process_data(&self, ctx: Context, data: String) -> McpResult<String> {
        // Context provides:
        // - Request correlation and distributed tracing
        // - Structured logging with metadata
        // - Performance monitoring and metrics
        // - Dependency injection container access
        ctx.info(&format!("Processing: {}", data)).await?;

        let start = std::time::Instant::now();
        let result = self.database.process(&data).await?; // injected dependency
        ctx.info(&format!("Completed in {:?}", start.elapsed())).await?;
        Ok(result)
    }
}
```
TurboMCP 1.0.8 delivers industry-leading transport layer implementation with complete MCP 2025-06-18 specification compliance:
- ✅ 100% MCP Protocol Compliance - All 5 transport types fully validated
- ⚡ High Performance - 334,961 messages/second (TCP transport)
- 🔄 Bidirectional Communication - Real-world client-server architecture
- 💾 Memory Efficient - 128 bytes per message average
- 🛡️ Production-Grade Security - Enterprise-ready session management
| Transport | Use Case | Performance | Security |
|---|---|---|---|
| STDIO | Claude Desktop, subprocess | Fast | Protocol isolation |
| HTTP/SSE | Web apps, browsers | Streaming | TLS + session mgmt |
| WebSocket | Real-time, bidirectional | Low latency | Secure WebSocket |
| TCP | High-performance | 334K msg/sec | TLS optional |
| Unix Socket | Local IPC, containers | Ultra-fast | File permissions |
⚠️ STDIO Protocol Compliance: When using STDIO transport (default for Claude Desktop), avoid any logging or output to stdout. The MCP protocol requires stdout to contain only JSON-RPC messages. Any other output will break client communication.
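A dependency-free sketch of that discipline: diagnostics go to stderr, stdout carries only JSON-RPC frames. The `jsonrpc_response` helper is illustrative, not a TurboMCP API; real servers build these messages with serde.

```rust
// STDIO-safe output: stdout is reserved for the protocol, stderr for humans.
fn jsonrpc_response(id: u64, result: &str) -> String {
    // Hand-rolled JSON to keep this sketch dependency-free.
    format!("{{\"jsonrpc\":\"2.0\",\"id\":{},\"result\":{}}}", id, result)
}

fn main() {
    eprintln!("server starting");                          // stderr: safe for diagnostics
    println!("{}", jsonrpc_response(1, "{\"tools\":[]}")); // stdout: JSON-RPC only
}
```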
```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let server = Calculator::new();

    match std::env::var("TRANSPORT").as_deref() {
        // High-performance TCP with session management
        Ok("tcp") => server.run_tcp("127.0.0.1:8080").await?,
        // Unix socket with Tokio best practices
        Ok("unix") => server.run_unix("/tmp/mcp.sock").await?,
        // HTTP/SSE with streaming support
        Ok("http") => server.run_http("127.0.0.1:8080").await?,
        // WebSocket for real-time communication
        Ok("websocket") => server.run_websocket("127.0.0.1:8081").await?,
        // STDIO for Claude Desktop (default)
        // CRITICAL: No logging for STDIO - pure JSON-RPC only
        _ => server.run_stdio().await?,
    }
    Ok(())
}
```
- 🔄 Automatic Reconnection - Circuit breakers with exponential backoff
- 📊 Real-time Metrics - Message throughput and latency tracking
- 🗃️ Session Persistence - Stateful connections with cleanup
- ⚖️ Load Balancing - Multi-connection support for scaling
- 🛡️ Error Recovery - Robust handling of connection failures
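The reconnection behavior above typically follows capped exponential backoff: each failed attempt doubles the wait, up to a ceiling. A plain-Rust sketch of the delay schedule only; TurboMCP's actual circuit-breaker API is not shown here.

```rust
use std::time::Duration;

/// Exponential backoff with a cap: base, 2*base, 4*base, ... never above max.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    // Shift is clamped so large attempt counts cannot overflow the multiplier.
    let exp = base_ms.saturating_mul(1u64 << attempt.min(20));
    Duration::from_millis(exp.min(max_ms))
}

fn main() {
    for attempt in 0..6 {
        // 100ms, 200ms, 400ms, 800ms, 1.6s, then capped at 2s
        println!("attempt {} -> wait {:?}", attempt, backoff_delay(attempt, 100, 2_000));
    }
}
```

Production implementations usually add jitter (a random offset) so many clients reconnecting at once don't synchronize their retries.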
Production-ready shutdown handling:
```rust
use tokio::signal;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let server = MyServer::new();
    let (server, shutdown_handle) = server.into_server_with_shutdown()?;

    let server_task = tokio::spawn(async move {
        server.run_stdio().await
    });

    signal::ctrl_c().await?;
    // NOTE: For STDIO transport, avoid logging to prevent JSON-RPC pollution.
    // For other transports, you could use: tracing::info!("Shutdown signal received");
    shutdown_handle.shutdown().await;
    server_task.await??;
    Ok(())
}
```
TurboMCP's modular, compile-time optimized architecture separates concerns while maximizing performance:
Our architecture prioritizes compile-time computation over runtime flexibility. While this creates a more sophisticated build process, it delivers unmatched runtime performance and predictability. Each crate is optimized for its specific role, with aggressive feature gating to ensure minimal production footprints.
| Crate | Purpose | Key Features |
|---|---|---|
| `turbomcp` | Main SDK | Procedural macros, prelude, integration |
| `turbomcp-core` | Core types | SIMD message handling, sessions, errors |
| `turbomcp-protocol` | MCP protocol | JSON-RPC, schema validation, versioning |
| `turbomcp-transport` | Transport layer | HTTP, WebSocket, circuit breakers |
| `turbomcp-server` | Server framework | Routing, authentication, middleware |
| `turbomcp-client` | Client library | Connection management, error recovery |
| `turbomcp-macros` | Proc macros | `#[server]`, `#[tool]`, `#[resource]` |
| `turbomcp-cli` | CLI tools | Testing, schema export, debugging |
Configure filesystem roots for secure server operations:
```rust
use turbomcp::prelude::*;

// Method 1: Using macro attributes (recommended)
#[server(
    name = "filesystem-server",
    version = "1.0.0",
    root = "file:///workspace:Project Workspace",
    root = "file:///tmp:Temporary Files",
    root = "file:///Users/shared:Shared Documents"
)]
impl FileSystemServer {
    #[tool("List files in directory")]
    async fn list_files(&self, ctx: Context, path: String) -> McpResult<Vec<String>> {
        ctx.info(&format!("Listing files in: {}", path)).await?;
        // File operations are bounded by configured roots
        Ok(vec!["file1.txt".to_string(), "file2.txt".to_string()])
    }
}

// Method 2: Using the ServerBuilder API
use turbomcp_server::ServerBuilder;
use turbomcp_protocol::types::Root;

let server = ServerBuilder::new()
    .name("filesystem-server")
    .version("1.0.0")
    .root("file:///workspace", Some("Workspace".to_string()))
    .roots(vec![
        Root { uri: "file:///tmp".to_string(), name: Some("Temp".to_string()) },
    ])
    .build();
```
Roots Features:
- 📁 Multiple Configuration Methods - Macro attributes, builder API, and runtime
- 🖥️ OS-Aware Defaults - Automatic platform-specific roots (Linux: `/`; macOS: `/` and `/Volumes`; Windows: drive letters)
- 🔒 Security Foundation - Establish filesystem operation boundaries
- 🔗 MCP Compliance - Full support for the `roots/list` protocol method
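The security value of roots reduces to a containment check: resolve the requested path against a root and verify the result never escapes it, even through `..` traversal. A dependency-free sketch of that check (TurboMCP's own enforcement may differ):

```rust
use std::path::{Component, Path, PathBuf};

/// Lexically normalize `..` and `.` so traversal can't escape the root.
fn normalize(path: &Path) -> PathBuf {
    let mut out = PathBuf::new();
    for comp in path.components() {
        match comp {
            Component::ParentDir => { out.pop(); } // step back, never below "/"
            Component::CurDir => {}                // "." is a no-op
            other => out.push(other),
        }
    }
    out
}

/// True if `requested`, joined onto `root`, still lies inside `root`.
fn within_root(root: &Path, requested: &Path) -> bool {
    normalize(&root.join(requested)).starts_with(root)
}

fn main() {
    let root = Path::new("/workspace");
    println!("{}", within_root(root, Path::new("src/main.rs")));   // inside the root
    println!("{}", within_root(root, Path::new("../etc/passwd"))); // traversal attempt
}
```

A real implementation would also canonicalize symlinks before comparing; this sketch is purely lexical.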
Server-initiated requests for interactive user input with world-class ergonomics:
```rust
use turbomcp::prelude::*;
use turbomcp::elicitation_api::{ElicitationResult, text, checkbox, integer_field};
use turbomcp_macros::elicit;

#[tool("Configure deployment")]
async fn deploy(&self, ctx: Context, project: String) -> McpResult<String> {
    // Simple elicit macro for quick prompts
    let confirmation = elicit!(ctx, "Deploy to production?")?;
    if matches!(confirmation, ElicitationResult::Decline(_)) {
        return Ok("Deployment cancelled".to_string());
    }

    // Advanced form with ergonomic builders
    let config = elicit("Configure deployment")
        .field("environment", text("Environment").options(&["dev", "staging", "production"]))
        .field("auto_scale", checkbox("Enable Auto-scaling").default(true))
        .field("replicas", integer_field("Replica Count").range(1.0, 10.0))
        .require(vec!["environment"])
        .send(&ctx.request)
        .await?;

    match config {
        ElicitationResult::Accept(data) => {
            let env = data.get::<String>("environment")?;
            let replicas = data.get::<i64>("replicas").unwrap_or(1);
            Ok(format!("Deployed {} to {} with {} replicas", project, env, replicas))
        }
        _ => Err(mcp_error!("Deployment cancelled")),
    }
}
```
Automatic JSON schema generation with validation:
```rust
#[tool("Process user data")]
async fn process_user(
    &self,
    #[description("User's email address")]
    email: String,
    #[description("User's age in years")]
    age: u8,
) -> McpResult<UserProfile> {
    // Schema automatically generated and validated
    Ok(UserProfile { email, age })
}
```
URI template-based resource handling:
```rust
#[resource("file://{path}")]
async fn read_file(&self, path: String) -> McpResult<String> {
    tokio::fs::read_to_string(&path).await
        .map_err(|e| mcp_error!("Resource error: {}", e))
}
```
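Conceptually, a template like `file://{path}` is matched by splitting at the variable: the literal prefix must match, and the remainder is captured as the parameter. A minimal sketch handling only single trailing-variable templates; TurboMCP's matcher is more general.

```rust
/// Capture the value of a single trailing `{variable}` in a URI template.
/// Returns None if the literal prefix doesn't match.
fn match_template<'a>(template: &str, uri: &'a str) -> Option<&'a str> {
    let open = template.find('{')?;       // start of the variable segment
    let prefix = &template[..open];       // literal part, e.g. "file://"
    uri.strip_prefix(prefix)              // the rest becomes the captured value
}

fn main() {
    let captured = match_template("file://{path}", "file:///etc/hosts");
    println!("{:?}", captured); // the captured path parameter
}
```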
Optimize binary size by selecting only needed transports:
```toml
# Minimal STDIO-only server
turbomcp = { version = "1.0.8", default-features = false, features = ["minimal"] }

# Network deployment with TCP + Unix
turbomcp = { version = "1.0.8", default-features = false, features = ["network"] }

# All transports for maximum flexibility
turbomcp = { version = "1.0.8", default-features = false, features = ["all-transports"] }
```
Install the CLI for development and testing:
```bash
cargo install turbomcp-cli
```
Usage:
```bash
# List available tools (HTTP)
turbomcp-cli tools-list --url http://localhost:8080/mcp

# List available tools (STDIO)
turbomcp-cli tools-list --command "./my-server"

# Call a tool with arguments
turbomcp-cli tools-call --url http://localhost:8080/mcp --name add --arguments '{"a": 5, "b": 3}'

# Export JSON schemas to file
turbomcp-cli schema-export --url http://localhost:8080/mcp --output schemas.json
```
TurboMCP's leapfrog architecture delivers superior performance through fundamental design choices:
- Zero-overhead abstractions - All tool registration and schema generation happens at compile time
- Macro-powered efficiency - `#[server]` and `#[tool]` macros eliminate runtime reflection overhead
- Type-driven performance - Rust's type system enables aggressive optimizations impossible in dynamic languages
- Smart feature gating - While our codebase is comprehensive, compile-time feature selection ensures lean production binaries
- JSON Processing - 2-3x faster than `serde_json` with SIMD acceleration
- Memory Efficiency - Zero-copy message handling with `Bytes` eliminates unnecessary allocations
- Concurrency - Tokio-based async runtime with efficient task scheduling
- Cold Start Speed - Pre-computed schemas and handlers enable faster initialization
- Reliability - Circuit breakers and connection pooling with intelligent failover
Our compile-time first approach means:
- No runtime schema generation - Schemas computed at compile time, not during requests
- Direct handler dispatch - O(1) tool lookup without HashMap traversals or string matching
- Zero reflection - All type information resolved statically
- Predictable performance - No garbage collection pauses or dynamic allocation surprises
The TurboMCP advantage: While some frameworks trade complexity for simplicity, we've engineered complexity away through compile-time optimization. Users get maximum performance with minimal cognitive overhead.
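The direct-dispatch point can be illustrated in plain Rust: when every tool name is known at compile time, routing is a `match` the compiler can turn into a jump table, with no runtime hash lookup or reflection. This is a conceptual sketch of the technique, not TurboMCP's generated code.

```rust
/// Statically dispatched tool routing: all tools known at compile time.
fn dispatch(tool: &str, a: i32, b: i32) -> Option<i32> {
    match tool {
        "add" => Some(a + b),
        "sub" => Some(a - b),
        _ => None, // unknown tool
    }
}

fn main() {
    println!("{:?}", dispatch("add", 5, 3)); // Some(8)
}
```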
TurboMCP includes 12 carefully crafted examples that guide you from basics to production deployment. See the Examples Guide for a complete learning path.
- 01_hello_world - Your first MCP server
- 02_clean_server - Minimal server using macros
- 03_basic_tools - Tool parameters and validation
- 04_resources_and_prompts - Resources and prompts system
- 05_stateful_patterns - State management patterns
- 06_architecture_patterns - Builder vs Macro APIs
- 06b_architecture_client - HTTP client example
- 07_transport_showcase - All transport methods
- 08_elicitation_complete - Server-initiated prompts
- 09_bidirectional_communication - All handler types
- 10_protocol_mastery - Complete protocol coverage
- 11_production_deployment - Enterprise features
Run any example:
```bash
# Basic examples
cargo run --example 01_hello_world
cargo run --example 02_clean_server

# Architecture comparison (run in separate terminals)
cargo run --example 06_architecture_patterns builder  # Terminal 1
cargo run --example 06_architecture_patterns macro    # Terminal 2
cargo run --example 06b_architecture_client           # Terminal 3 - test client
```
Setup:
```bash
git clone https://github.com/Epistates/turbomcp.git
cd turbomcp
cargo build --workspace
```
Testing:
```bash
make test                 # Run comprehensive test suite
cargo test --workspace    # Run all tests
cargo test --all-features # Test with all features
```
Quality:
```bash
cargo fmt --all                        # Format code
cargo clippy --workspace --all-targets # Lint code
cargo bench --workspace                # Run benchmarks
```
- API Documentation - Complete API reference
- Security Guide - Comprehensive security documentation
- Architecture Guide - System design and components
- MCP Specification - Official protocol docs
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes and add tests
- Run the full test suite: `make test`
- Submit a pull request
Please ensure all tests pass and follow the existing code style.
Licensed under the MIT License.
- Model Context Protocol - Official protocol specification
- Claude Desktop - AI assistant with MCP support
- MCP Servers - Official server implementations