Ray provides a simple, universal API for building distributed applications.


Keywords
ray, distributed, parallel, machine-learning, hyperparameter-tuning, reinforcement-learning, deep-learning, serving, python, automl, data-science, deployment, hyperparameter-optimization, hyperparameter-search, java, llm-serving, model-selection, optimization, pytorch, rllib, tensorflow
License
Apache-2.0
Install
pip install ray==2.0.0rc1

Documentation

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for simplifying ML compute.

Learn more about Ray AI Libraries:

  • Data: Scalable Datasets for ML
  • Train: Distributed Training
  • Tune: Scalable Hyperparameter Tuning (see the example after this list)
  • RLlib: Scalable Reinforcement Learning
  • Serve: Scalable and Programmable Serving
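
As a rough sketch of one of these libraries, the snippet below uses Ray Tune's Tuner API to run a small grid search. The objective function and search space are illustrative only and not part of Ray itself:

    from ray import tune

    def objective(config):
        # Returning a dict reports it as the trial's final result.
        return {"score": (config["x"] - 3) ** 2}

    tuner = tune.Tuner(
        objective,
        param_space={"x": tune.grid_search([0, 1, 2, 3, 4])},
    )
    results = tuner.fit()

    # Inspect the configuration that minimized the reported score.
    print(results.get_best_result(metric="score", mode="min").config)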

Or more about Ray Core and its key abstractions, illustrated in the sketch after the list:

  • Tasks: Stateless functions executed in the cluster.
  • Actors: Stateful worker processes created in the cluster.
  • Objects: Immutable values accessible across the cluster.
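
A minimal sketch of these three abstractions, assuming a local Ray runtime; the square function and Counter class are illustrative, not part of Ray's API:

    import ray

    ray.init()  # start a local Ray runtime

    # Task: a stateless function executed remotely in the cluster.
    @ray.remote
    def square(x):
        return x * x

    # Actor: a stateful worker process created in the cluster.
    @ray.remote
    class Counter:
        def __init__(self):
            self.total = 0

        def add(self, value):
            self.total += value
            return self.total

    # Objects: immutable values in the object store, referenced by ObjectRefs.
    refs = [square.remote(i) for i in range(4)]  # returns ObjectRefs immediately
    print(ray.get(refs))                         # [0, 1, 4, 9]

    counter = Counter.remote()
    print(ray.get(counter.add.remote(10)))       # 10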

Monitor and debug Ray applications and clusters using the Ray dashboard.

Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Why Ray?

Today's ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.

Ray is a unified way to scale Python and AI applications from a laptop to a cluster.

With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, meaning that it can performantly run any kind of workload. If your application is written in Python, you can scale it with Ray, no other infrastructure required.
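
As a sketch of that claim, the code below runs unchanged on a laptop or a cluster; only the ray.init call differs, and the cluster address shown in the comment is a placeholder:

    import ray

    # On a laptop, ray.init() starts a local Ray runtime.
    # On a cluster, pass an address instead, e.g.:
    #   ray.init(address="auto")  # when run from a node inside the cluster
    ray.init()

    @ray.remote
    def process(batch):
        # Stand-in for real per-batch work.
        return sum(batch)

    batches = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
    totals = ray.get([process.remote(b) for b in batches])
    print(sum(totals))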

Getting Involved

Platform        | Purpose                                                       | Estimated Response Time | Support Level
Discourse Forum | For discussions about development and questions about usage.  | < 1 day                 | Community
GitHub Issues   | For reporting bugs and filing feature requests.                | < 2 days                | Ray OSS Team
Slack           | For collaborating with other Ray users.                        | < 2 days                | Community
StackOverflow   | For asking questions about how to use Ray.                     | 3-5 days                | Community
Meetup Group    | For learning about Ray projects and best practices.            | Monthly                 | Ray DevRel
Twitter         | For staying up-to-date on new features.                        | Daily                   | Ray DevRel