Arcadia is a one-stop, enterprise-grade LLMOps platform that provides a unified interface for developers and operators to build, debug, deploy, and manage AI agents with an orchestration engine (RAG (Retrieval-Augmented Generation) and LLM fine-tuning are already supported).
- Build, debug, and deploy AI agents on ops-console (the LLMOps GUI)
- Chat with AI agents on agent-portal (the GPT-style chat GUI)
- Enterprise-grade infrastructure with KubeBB: multi-tenant isolation (data, model services), built-in OIDC, RBAC, and auditing, so different companies and departments can develop on a single shared platform
- Support for most popular LLMs (large language models), embedding models, reranking models, etc.
- Inference acceleration with vLLM, distributed inference with Ray, quantization, and more
- Support for fine-tuning with llama-factory
- Built on langchaingo (Golang) for better performance and maintainability
Arcadia's design and development follow the operator pattern, which extends the Kubernetes APIs.
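As a rough illustration of the operator pattern (a minimal sketch only; the `Agent` kind and its fields below are hypothetical and are not Arcadia's actual CRDs), a custom resource is declared as a Go type whose spec describes the desired state and whose status is written back by a reconciling controller:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Agent is a hypothetical custom resource illustrating the operator
// pattern: the spec declares the desired state and a controller
// reconciles the cluster toward it. See the Architecture Overview
// for Arcadia's actual API types.
type Agent struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   AgentSpec   `json:"spec,omitempty"`
	Status AgentStatus `json:"status,omitempty"`
}

// AgentSpec declares which model the agent should use and the prompt
// it should be orchestrated with (illustrative fields only).
type AgentSpec struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt,omitempty"`
}

// AgentStatus reports the observed state written back by the controller.
type AgentStatus struct {
	Ready bool `json:"ready"`
}
```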
For details, check the Architecture Overview.
Visit our online documentation.
Read the user guide.
- Reranking with bge-reranker-large
- Reranking with bce-reranking
Fully compatible with langchain vectorstores (usage sketch below):
- ✅ PG Vector: KubeAGI added PG Vector support to the langchaingo project.
- ✅ ChromaDB
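The sketch below shows how such a langchaingo-compatible store is typically used (a minimal sketch, assuming a recent langchaingo release where `pgvector.New` accepts `WithConnectionURL`/`WithEmbedder`/`WithCollectionName` options and `AddDocuments` returns the inserted IDs; the connection URL, collection name, and embedder choice are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/embeddings"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/schema"
	"github.com/tmc/langchaingo/vectorstores/pgvector"
)

func main() {
	ctx := context.Background()

	// Any langchaingo-compatible embedder works here; an OpenAI-compatible
	// endpoint is used only as an example (reads OPENAI_API_KEY from env).
	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}
	embedder, err := embeddings.NewEmbedder(llm)
	if err != nil {
		log.Fatal(err)
	}

	// Connect to a PG Vector instance (placeholder DSN and collection name).
	store, err := pgvector.New(ctx,
		pgvector.WithConnectionURL("postgres://user:pass@localhost:5432/arcadia?sslmode=disable"),
		pgvector.WithEmbedder(embedder),
		pgvector.WithCollectionName("docs"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Index a document, then run a similarity search against it.
	_, err = store.AddDocuments(ctx, []schema.Document{
		{PageContent: "Arcadia is an enterprise-grade LLMOps platform."},
	})
	if err != nil {
		log.Fatal(err)
	}
	results, err := store.SimilaritySearch(ctx, "What is Arcadia?", 1)
	if err != nil {
		log.Fatal(err)
	}
	for _, doc := range results {
		fmt.Println(doc.PageContent)
	}
}
```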
Thanks to langchaingo, we have comprehensive AI capabilities in Golang! But to meet our own needs, we have further developed a number of additional toolchains:
- Optimized DocumentLoaders: optimized CSV loading, etc. (see the loader sketch after this list)
- Extended LLMs: zhipuai, dashscope, etc.
- Tools: bingsearch, weather, etc.
- AppRuntime: a powerful node (LLM, Chain, KnowledgeBase, VectorStore, Agent, etc.) orchestration runtime for Arcadia
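For context, Arcadia's optimized loaders follow the same `documentloaders` interface as langchaingo. The sketch below uses langchaingo's stock CSV loader (not Arcadia's optimized one) against a placeholder file path:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tmc/langchaingo/documentloaders"
)

func main() {
	ctx := context.Background()

	// Open a CSV file (placeholder path) and load each row as a document.
	f, err := os.Open("./data/example.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	loader := documentloaders.NewCSV(f)
	docs, err := loader.Load(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range docs {
		fmt.Println(d.PageContent)
	}
}
```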
We provide examples of how to use them; see more details here.
If you want to contribute to Arcadia, refer to the contribution guide.
If you need support, start with the troubleshooting guide or create a GitHub issue.