usls is a Rust library integrated with ONNXRuntime that provides a collection of state-of-the-art models for Computer Vision and Vision-Language tasks, including:
- YOLO Models: YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, YOLOv10, YOLOv11
- SAM Models: SAM, SAM2, MobileSAM, EdgeSAM, SAM-HQ, FastSAM
- Vision Models: RTDETR, RTMO, DB, SVTR, Depth-Anything-v1-v2, DINOv2, MODNet, Sapiens, DepthPro
- Vision-Language Models: CLIP, BLIP, GroundingDINO, YOLO-World, Florence2
Supported Models
| Model | Task / Type | Example | CUDA f32 | CUDA f16 | TensorRT f32 | TensorRT f16 |
|---|---|---|---|---|---|---|
| YOLOv5 | Classification<br />Object Detection<br />Instance Segmentation | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv6 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv7 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv8 | Object Detection<br />Instance Segmentation<br />Classification<br />Oriented Object Detection<br />Keypoint Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv9 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv11 | Object Detection<br />Instance Segmentation<br />Classification<br />Oriented Object Detection<br />Keypoint Detection | demo | ✅ | ✅ | ✅ | ✅ |
| RTDETR | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| FastSAM | Instance Segmentation | demo | ✅ | ✅ | ✅ | ✅ |
| SAM | Segment Anything | demo | ✅ | ✅ | | |
| SAM2 | Segment Anything | demo | ✅ | ✅ | | |
| MobileSAM | Segment Anything | demo | ✅ | ✅ | | |
| EdgeSAM | Segment Anything | demo | ✅ | ✅ | | |
| SAM-HQ | Segment Anything | demo | ✅ | ✅ | | |
| YOLO-World | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| DINOv2 | Vision-Self-Supervised | demo | ✅ | ✅ | ✅ | ✅ |
| CLIP | Vision-Language | demo | ✅ | ✅ | ✅ Visual<br />✅ Textual | ✅ Visual<br />✅ Textual |
| BLIP | Vision-Language | demo | ✅ | ✅ | ✅ Visual<br />✅ Textual | ✅ Visual<br />✅ Textual |
| DB | Text Detection | demo | ✅ | ✅ | ✅ | ✅ |
| SVTR | Text Recognition | demo | ✅ | ✅ | ✅ | ✅ |
| RTMO | Keypoint Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOPv2 | Panoptic Driving Perception | demo | ✅ | ✅ | ✅ | ✅ |
| Depth-Anything v1 & v2 | Monocular Depth Estimation | demo | ✅ | ✅ | ✅ | ✅ |
| MODNet | Image Matting | demo | ✅ | ✅ | ✅ | ✅ |
| GroundingDINO | Open-Set Detection With Language | demo | ✅ | ✅ | | |
| Sapiens | Body Part Segmentation | demo | ✅ | ✅ | | |
| Florence2 | A Variety of Vision Tasks | demo | ✅ | ✅ | | |
| DepthPro | Monocular Depth Estimation | demo | ✅ | ✅ | | |
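The CUDA and TensorRT columns above correspond to the execution provider and precision you select when building `Options`. A minimal sketch of that selection, assuming `with_cuda` and `with_fp16` setters exist alongside the `with_trt` call used in the example code further below (these two names are assumptions and may differ across usls versions):

```rust
use usls::Options;

// Sketch: pick an execution provider / precision to match the table above.
// `with_trt`, `with_model`, and `with_confs` follow the example code below;
// `with_cuda` and `with_fp16` are assumed setters.
fn build_options() -> anyhow::Result<Options> {
    let options = Options::new()
        .with_cuda(0) // CUDA EP on GPU 0 (assumed setter)
        // .with_trt(0) // or: TensorRT EP on GPU 0
        // .with_fp16(true) // half precision, for the f16 columns (assumed setter)
        .with_model("yolo/v8-m-dyn.onnx")? // model file, as in the example below
        .with_confs(&[0.2]);
    Ok(options)
}
```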
You have two options to link the ONNXRuntime library:

- Option 1: Manual linking
  - For detailed setup instructions, refer to the ORT documentation.
  - Download the ONNX Runtime package from the Releases page.
  - Set up the library path by exporting the `ORT_DYLIB_PATH` environment variable:

    ```shell
    export ORT_DYLIB_PATH=/path/to/onnxruntime/lib/libonnxruntime.so.1.19.0
    ```

- Option 2: Automatic linking
  - Just use `--features auto`:

    ```shell
    cargo run -r --example yolo --features auto
    ```

Run an example:

```shell
cargo run -r --example yolo   # blip, clip, yolop, svtr, db, ...
```
Add usls as a dependency to your project's Cargo.toml:

```shell
cargo add usls
```

Or use a specific commit:

```toml
[dependencies]
usls = { git = "https://github.com/jamjamjon/usls", rev = "commit-sha" }
```
- Build model with the provided `models` and `Options`
- Load images, video and stream with `DataLoader`
- Do inference
- Retrieve inference results from `Vec<Y>`
- Annotate inference results with `Annotator`
- Display images and write them to video with `Viewer`
Example code:
```rust
use usls::{models::YOLO, Annotator, DataLoader, Nms, Options, Viewer, Vision, YOLOTask, YOLOVersion};

fn main() -> anyhow::Result<()> {
    // Build model with Options
    let options = Options::new()
        .with_trt(0)
        .with_model("yolo/v8-m-dyn.onnx")?
        .with_yolo_version(YOLOVersion::V8) // YOLOVersion: V5, V6, V7, V8, V9, V10, RTDETR
        .with_yolo_task(YOLOTask::Detect) // YOLOTask: Classify, Detect, Pose, Segment, Obb
        .with_ixx(0, 0, (1, 2, 4).into())
        .with_ixx(0, 2, (0, 640, 640).into())
        .with_ixx(0, 3, (0, 640, 640).into())
        .with_confs(&[0.2]);
    let mut model = YOLO::new(options)?;

    // Build DataLoader to load image(s), video, stream
    let dl = DataLoader::new(
        // "./assets/bus.jpg", // local image
        // "images/bus.jpg", // remote image
        // "../images-folder", // local images (from folder)
        // "../demo.mp4", // local video
        // "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4", // online video
        "rtsp://admin:kkasd1234@192.168.2.217:554/h264/ch1/", // stream
    )?
    .with_batch(2) // iterate with batch_size = 2
    .build()?;

    // Build annotator
    let annotator = Annotator::new()
        .with_bboxes_thickness(4)
        .with_saveout("YOLO-DataLoader");

    // Build viewer
    let mut viewer = Viewer::new().with_delay(10).with_scale(1.).resizable(true);

    // Run and annotate results
    for (xs, _) in dl {
        let ys = model.forward(&xs, false)?;
        // annotator.annotate(&xs, &ys);
        let images_plotted = annotator.plot(&xs, &ys, false)?;

        // show image
        viewer.imshow(&images_plotted)?;

        // check out window and key event
        if !viewer.is_open() || viewer.is_key_pressed(usls::Key::Escape) {
            break;
        }

        // write video
        viewer.write_batch(&images_plotted)?;

        // Retrieve inference results
        for y in ys {
            // bboxes
            if let Some(bboxes) = y.bboxes() {
                for bbox in bboxes {
                    println!(
                        "Bbox: {}, {}, {}, {}, {}, {}",
                        bbox.xmin(),
                        bbox.ymin(),
                        bbox.xmax(),
                        bbox.ymax(),
                        bbox.confidence(),
                        bbox.id(),
                    );
                }
            }
        }
    }

    // finish video write
    viewer.finish_write()?;

    Ok(())
}
```
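Depending on the task, a `Y` can carry more than bounding boxes. A minimal sketch of inspecting a batch of results, assuming accessors such as `keypoints()`, `masks()`, and `probs()` exist alongside the `bboxes()` call used above (those three names are assumptions and may differ in your usls version):

```rust
use usls::Y;

// Sketch: summarize a batch of inference results. Only `bboxes()` is taken
// from the example above; the other accessors are assumptions.
fn summarize(ys: &[Y]) {
    for y in ys {
        if let Some(bboxes) = y.bboxes() {
            println!("detections: {}", bboxes.len());
        }
        if let Some(keypoints) = y.keypoints() {
            // keypoint / pose results (assumed accessor)
            println!("keypoint sets: {}", keypoints.len());
        }
        if let Some(masks) = y.masks() {
            // segmentation masks (assumed accessor)
            println!("masks: {}", masks.len());
        }
        if y.probs().is_some() {
            // classification probabilities (assumed accessor)
            println!("classification probs available");
        }
    }
}
```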
This project is licensed under the terms in the LICENSE file.