cuvs-cu11

cuVS: Vector Search on the GPU


Keywords
anns, clustering, cuda, distance, gpu, information-retrieval, llm, machine-learning, nearest-neighbors, neighborhood-methods, similarity-search, sparse, statistics, vector-search, vector-similarity, vector-store
License
Apache-2.0
Install
pip install cuvs-cu11==24.8.0

Documentation

 cuVS: Vector Search and Clustering on the GPU

Note

cuVS is a new library mostly derived from the approximate nearest neighbors and clustering algorithms in the RAPIDS RAFT library of machine learning and data mining primitives. As of version 24.10 (released in October 2024), cuVS contains the most fully-featured versions of the approximate nearest neighbors and clustering algorithms from RAFT. The algorithms that have been migrated to cuVS will be removed from RAFT in version 24.12 (releasing in December 2024).

Contents

  1. Useful Resources
  2. What is cuVS?
  3. Installing cuVS
  4. Getting Started
  5. Contributing
  6. References

Useful Resources

What is cuVS?

cuVS contains state-of-the-art implementations of several algorithms for running approximate nearest neighbors and clustering on the GPU. It can be used directly or through the various databases and other libraries that have integrated it. The primary goal of cuVS is to simplify the use of GPUs for vector similarity search and clustering.

Vector search is an information retrieval method that has been growing in popularity over the past few years, partly because of the rising importance of multimedia embeddings created from unstructured data and the need to perform semantic search on the embeddings to find items which are semantically similar to each other.

Vector search is also used in data mining and machine learning tasks and comprises an important step in many clustering and visualization algorithms like UMAP, t-SNE, K-means, and HDBSCAN.

Finally, faster vector search enables interactions between dense vectors and graphs. Converting a pile of dense vectors into nearest neighbors graphs unlocks the entire world of graph analysis algorithms, such as those found in GraphBLAS and cuGraph.
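
A minimal sketch of that idea in Python, assuming the cuvs.neighbors.brute_force module, CuPy for device arrays, and that search returns (distances, neighbors); the random data is purely illustrative and exact signatures may vary between releases:

import cupy as cp
from cuvs.neighbors import brute_force

# Random vectors standing in for real embeddings (illustrative data only)
n_vectors, dim, k = 10_000, 128, 15
vectors = cp.random.random_sample((n_vectors, dim), dtype=cp.float32)

# Build an exact (brute-force) index over the vectors, then query the dataset
# against itself: row i of `neighbors` lists the k closest vectors to vector i
# (including vector i itself), which is a k-NN graph in edge-list form, ready
# for downstream graph analytics.
index = brute_force.build(vectors)
distances, neighbors = brute_force.search(index, vectors, k)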

Below are some common use cases for vector search:

  • Semantic search

    • Generative AI & Retrieval augmented generation (RAG)
    • Recommender systems
    • Computer vision
    • Image search
    • Text search
    • Audio search
    • Molecular search
    • Model training
  • Data mining

    • Clustering algorithms
    • Visualization algorithms
    • Sampling algorithms
    • Class balancing
    • Ensemble methods
    • k-NN graph construction

Why cuVS?

There are several benefits to using cuVS and GPUs for vector search, including:

  1. Fast index build
  2. Latency critical and high throughput search
  3. Parameter tuning
  4. Cost savings
  5. Interoperability (build on GPU, deploy on CPU)
  6. Multiple language support
  7. Building blocks for composing new or accelerating existing algorithms

In addition to the items above, cuVS takes on the burden of keeping non-trivial accelerated code up to date as new NVIDIA architectures and CUDA versions are released. This provides a delightful development experience, guaranteeing that any libraries, databases, or applications built on top of it will always get the best performance and scale.

cuVS Technology Stack

cuVS is built on top of the RAPIDS RAFT library of high-performance machine learning primitives and on low-level CUDA libraries, and provides all the routines necessary for vector search and clustering on the GPU.

Installing cuVS

cuVS comes with pre-built packages that can be installed through conda and pip. Different packages are available for the different languages supported by cuVS:

Language   Package
Python     cuvs
C/C++      libcuvs

Stable release

It is recommended to use mamba to install the desired packages. The following command installs the Python package; you can replace cuvs with any of the packages in the table above:

conda install -c conda-forge -c nvidia -c rapidsai cuvs

The cuVS Python package can also be installed through pip (https://docs.rapids.ai/install#pip).

For CUDA 11 packages:

pip install cuvs-cu11 --extra-index-url=https://pypi.nvidia.com

And CUDA 12 packages:

pip install cuvs-cu12 --extra-index-url=https://pypi.nvidia.com
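
To quickly sanity-check the installation (a minimal check, assuming a CUDA-capable GPU and a matching driver are present):

python -c "from cuvs.neighbors import cagra; print('cuVS import OK')"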

Nightlies

If installing a version that has not yet been released, the rapidsai channel can be replaced with rapidsai-nightly:

conda install -c conda-forge -c nvidia -c rapidsai-nightly cuvs=25.02

cuVS also has pip wheel packages that can be installed. Please see the Build and Install Guide for more information on installing the available cuVS packages and building from source.

Getting Started

The following code snippets train an approximate nearest neighbors index for the CAGRA algorithm in each of the languages supported by cuVS.

Python API

from cuvs.neighbors import cagra

dataset = load_data()
index_params = cagra.IndexParams()

index = cagra.build(index_params, dataset)
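
The snippet above only builds the index. The continuation below sketches the matching search call; it assumes cagra.SearchParams and cagra.search as exposed by recent cuVS releases, uses a hypothetical load_queries() helper analogous to load_data() above, and the return order of the result arrays may differ between versions:

search_params = cagra.SearchParams()

queries = load_queries()  # hypothetical helper, analogous to load_data() above

# Retrieve the 10 nearest neighbors of each query; results come back as device arrays
distances, neighbors = cagra.search(search_params, index, queries, 10)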

C++ API

#include <cuvs/neighbors/cagra.hpp>

using namespace cuvs::neighbors;

raft::device_matrix_view<float> dataset = load_dataset();
raft::device_resources res;

cagra::index_params index_params;

auto index = cagra::build(res, index_params, dataset);

For more code examples of the C++ APIs, including drop-in CMake project templates, please refer to the C++ examples directory in the codebase.

C API

#include <cuvs/neighbors/cagra.h>

cuvsResources_t res;
cuvsCagraIndexParams_t index_params;
cuvsCagraIndex_t index;

DLManagedTensor dataset;
load_dataset(&dataset);

cuvsResourcesCreate(&res);
cuvsCagraIndexParamsCreate(&index_params);
cuvsCagraIndexCreate(&index);

cuvsCagraBuild(res, index_params, &dataset, index);

cuvsCagraIndexDestroy(index);
cuvsCagraIndexParamsDestroy(index_params);
cuvsResourcesDestroy(res);

For more code examples of the C APIs, including drop-in CMake project templates, please refer to the C examples.

Rust API

use cuvs::cagra::{Index, IndexParams, SearchParams};
use cuvs::{ManagedTensor, Resources, Result};

use ndarray::s;
use ndarray_rand::rand_distr::Uniform;
use ndarray_rand::RandomExt;

/// Example showing how to index and search data with CAGRA
fn cagra_example() -> Result<()> {
    let res = Resources::new()?;

    // Create a new random dataset to index
    let n_datapoints = 65536;
    let n_features = 512;
    let dataset =
        ndarray::Array::<f32, _>::random((n_datapoints, n_features), Uniform::new(0., 1.0));

    // build the cagra index
    let build_params = IndexParams::new()?;
    let index = Index::build(&res, &build_params, &dataset)?;
    println!(
        "Indexed {}x{} datapoints into cagra index",
        n_datapoints, n_features
    );

    // use the first 4 points from the dataset as queries : will test that we get them back
    // as their own nearest neighbor
    let n_queries = 4;
    let queries = dataset.slice(s![0..n_queries, ..]);

    let k = 10;

    // CAGRA search API requires queries and outputs to be on device memory
    // copy query data over, and allocate new device memory for the distances/neighbors
    // outputs
    let queries = ManagedTensor::from(&queries).to_device(&res)?;
    let mut neighbors_host = ndarray::Array::<u32, _>::zeros((n_queries, k));
    let neighbors = ManagedTensor::from(&neighbors_host).to_device(&res)?;

    let mut distances_host = ndarray::Array::<f32, _>::zeros((n_queries, k));
    let distances = ManagedTensor::from(&distances_host).to_device(&res)?;

    let search_params = SearchParams::new()?;

    index.search(&res, &search_params, &queries, &neighbors, &distances)?;

    // Copy back to host memory
    distances.to_host(&res, &mut distances_host)?;
    neighbors.to_host(&res, &mut neighbors_host)?;

    // nearest neighbors should be themselves, since queries are from the
    // dataset
    println!("Neighbors {:?}", neighbors_host);
    println!("Distances {:?}", distances_host);
    Ok(())
}

For more code examples of the Rust APIs, including drop-in project templates, please refer to the Rust examples.

Contributing

If you are interested in contributing to the cuVS library, please read our Contributing guidelines. Refer to the Developer Guide for details on the developer guidelines, workflows, and principles.

References

For the interested reader, many of the accelerated implementations in cuVS are based on research papers that provide much more background. If you use these algorithms in your own research, please cite the corresponding papers.