plex 🧫×🧬→💊
⚡ Run highly reproducible scientific applications on top of a decentralised compute and storage network. ⚡
Plex is a simple client for distributed computation.
- 🌎 Build once, run anywhere: Plex uses distributed compute and storage to run containers on a public network. Need GPUs? We've got you covered.
- 🔍 Content-addressed by default: Every file processed by plex has a deterministic address based on its content. Keep track of your files and always share the right results with other scientists.
- 🪙 Ownership tracking built-in: Every compute event on plex is mintable as an on-chain token that grants the holder rights over the newly generated data.
- 🔗 Strictly composable: Every tool in plex has declared inputs and outputs. Plugging together tools by other authors should be easy.
Plex is based on Bacalhau and IPFS, and is inspired by the Common Workflow Language.
🐍 Python pip package (Python 3.8+)
- Install plex with pip
pip install PlexLabExchange
- Run a plex example in a Python file, notebook, or REPL
from plex import plex_run

# returns the CID of the resulting IO JSON and a local filepath to a copy of it
io_json_cid, io_json_local_filepath = plex_run('QmWdKXmSz1p3zGfHmwBb5FHCS7skc4ryEA97pPVxJCT5Wx')
🚀 Plex CLI in one minute
1. Install the client
Mac/Linux users: open a terminal and run
source <(curl -sSL https://raw.githubusercontent.com/labdao/plex/main/install.sh)
Windows users: open PowerShell as an administrator and run
Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/labdao/plex/main/install.ps1" -UseBasicParsing).Content
2. Submit an example plex job
./plex init -t tools/equibind.json -i '{"protein": ["testdata/binding/abl/7n9g.pdb"], "small_molecule": ["testdata/binding/abl/ZINC000003986735.sdf"]}' --scatteringMethod=dotProduct --autoRun=true
3. Read the docs to learn how to use plex with your own data and tools
4. Request access to our VIP Jupyter Hub environment and NFT Testnet minting: VIP Beta Access Form
💡 Use-Cases
- 🧬 run plex to fold proteins
- 💊 run plex for small-molecule docking
- 🐋 configure your containerised tool to run on plex
🧑‍💻 Developer Guide
Building plex from source
git clone https://github.com/labdao/plex
cd plex
go build
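After building, you can smoke-test the fresh binary by submitting the same example job shown in the quickstart above:

./plex init -t tools/equibind.json -i '{"protein": ["testdata/binding/abl/7n9g.pdb"], "small_molecule": ["testdata/binding/abl/ZINC000003986735.sdf"]}' --scatteringMethod=dotProduct --autoRun=true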
Running the web app locally
Setup
Frontend Development Only
export NEXT_PUBLIC_BACKEND_URL=https://api.prod.labdao.xyz
export NEXT_PUBLIC_IPFS_GATEWAY_ENDPOINT=http://bacalhau.prod.labdao.xyz:8080/ipfs/
cd frontend
npm install
npm run dev
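With the dev server running, the app should be reachable on Next.js's default port 3000 (matching FRONTEND_URL below). A quick smoke test, assuming curl is available:

curl -I http://localhost:3000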
- Install Docker
- Define the necessary environment variables
NEXT_PUBLIC_BACKEND_URL=http://localhost:8080
FRONTEND_URL=http://localhost:3000
POSTGRES_PASSWORD=MAKE_UP_SOMETHING_RANDOM
POSTGRES_USER=labdao
POSTGRES_DB=labdao
POSTGRES_HOST=localhost
- Recommended: Install direnv. With it installed, you can create a .env file with the above environment variables and have them automagically set when you descend into the folder.
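As a sketch, pair that .env file with a one-line .envrc that uses direnv's built-in dotenv loader:

# .envrc: direnv evaluates this on entering the directory and
# exports the variables defined in .env
dotenv .env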
Running the complete stack locally
We have docker-compose files available to bring up the stack locally.
Note:
- Only the amd64 architecture is currently supported.
- New Docker installations include Docker Compose; older installations require you to install docker-compose separately and run
docker-compose up -d
Running
# Optionally, build in parallel before running
docker compose build --parallel
# Build and bring up stack
docker compose up -d --wait
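Once up, you can confirm the services started cleanly:

# show status/health of the compose services
docker compose ps
# follow recent logs if anything failed to start
docker compose logs -f --tail=100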
To run the plex CLI against the local environment, simply set BACALHAU_API_HOST=127.0.0.1
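For example, to re-run the quickstart job against the local stack:

# point the CLI at the local Bacalhau node
export BACALHAU_API_HOST=127.0.0.1
./plex init -t tools/equibind.json -i '{"protein": ["testdata/binding/abl/7n9g.pdb"], "small_molecule": ["testdata/binding/abl/ZINC000003986735.sdf"]}' --scatteringMethod=dotProduct --autoRun=true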
Running with private IPFS
Requires ipfs to be available locally.
docker compose -f docker-compose.yml -f docker-compose.private.yml up -d --wait
To run the plex CLI against the local private environment, export the following parameters to your shell before executing plex commands:
# using temp directory for ipfs repo
export IPFS_PATH=$(mktemp -d)
# Initialize IPFS repo
ipfs init -e
# Copy over swarm key and config
cp -rav $(pwd)/docker/ipfs_data/* "${IPFS_PATH}/"
export BACALHAU_API_HOST="127.0.0.1"
export BACALHAU_SERVE_IPFS_PATH="${IPFS_PATH}"
export BACALHAU_IPFS_SWARM_ADDRESSES="/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWLpoHJCGxxKozRaUK1e1m2ocyVPB9dzbsU2cydujYBCD7"
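As a quick sanity check (this assumes the standard swarm.key filename used by private IPFS networks):

# the swarm key copied above should now be in the temp repo
cat "${IPFS_PATH}/swarm.key"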
Running the backend database only
docker compose up -d dbbackend --wait
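To verify the database is reachable, a sketch assuming the psql client is installed and the container publishes Postgres on the default port 5432:

# connect with the values from the .env above and print connection info
PGPASSWORD="${POSTGRES_PASSWORD}" psql -h localhost -p 5432 -U labdao -d labdao -c '\conninfo'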
Start the Frontend React App
npm --prefix ./frontend run dev
Start the Backend Go App
go run main.go web
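The backend should come up on port 8080, matching the NEXT_PUBLIC_BACKEND_URL value above. A quick check:

# any HTTP response, even a 404, confirms the server is listening
curl -i http://localhost:8080/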
Running a compute node
This script sets up a compute instance to run LabDAO jobs. It requires a Linux OS with an Nvidia GPU.
Tested on Ubuntu 20.04 LTS with Nvidia T4, V100, and A10 GPUs (AWS G4, P3, and G5 instance types).
The install script sets up Docker, the Nvidia drivers, the Nvidia Container Toolkit, and IPFS.
curl -sL https://raw.githubusercontent.com/labdao/plex/main/scripts/provide-compute.sh | bash && newgrp docker
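Optionally, sanity-check the GPU setup before serving jobs (a sketch; the CUDA image tag is an assumption, any CUDA base image works):

# drivers visible on the host
nvidia-smi
# GPU visible from inside a container via the Nvidia Container Toolkit
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu20.04 nvidia-smi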
After the script completes, run the following command in a separate terminal to start the IPFS daemon (the first step toward running a Bacalhau server that accepts jobs).
ipfs daemon
Once the daemon is running, configure the Bacalhau node based on the addresses used by the IPFS node.
ipfs id
# copy the ip4 tcp output and change port 4001 to 5001 then export
export IPFS_CONNECT=/ip4/127.0.0.1/tcp/5001/p2p/<your id goes here>
# example: export IPFS_CONNECT=/ip4/127.0.0.1/tcp/5001/p2p/12D3KooWPH1BpPfNXwkf778GMP2H5z7pwjKVQFnA5NS3DngU7pxG
LOG_LEVEL=debug bacalhau serve --job-selection-accept-networked --limit-total-gpu 1 --limit-total-memory 12gb --ipfs-connect $IPFS_CONNECT
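If you'd rather not copy the address by hand, this one-liner sketch performs the same 4001-to-5001 edit described above (it assumes ipfs id lists a loopback TCP address):

export IPFS_CONNECT=$(ipfs id -f='<addrs>' | grep '/ip4/127.0.0.1/tcp/4001/' | head -n1 | sed 's|/tcp/4001/|/tcp/5001/|')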
To download large Bacalhau results, the command below may need to be run:
sudo sysctl -w net.core.rmem_max=2500000
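To persist that setting across reboots (optional; standard sysctl practice, not specific to plex):

echo 'net.core.rmem_max=2500000' | sudo tee /etc/sysctl.d/99-ipfs.conf
sudo sysctl --system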
💁 Contributing
PRs are welcome! Please consider our Contributing Guidelines when joining.
From time to time, we also post help-wanted bounty issues; please consider our Bounty Policy when engaging with LabDAO.