Robodog Code is a lightweight, zero-install, fast, command-line style generative AI client that integrates multiple providers (OpenAI, OpenRouter, LlamaAI, DeepSeek, Anthropic, Sarvam AI, Google Search API, and more) into a unified interface. Key capabilities include:
NEVER TRUST A CODE SPEWING ROBOT!
- Access to cutting-edge models: `o4-mini` (200k context), `gpt-4`, `gpt-4-turbo`, `dall-e-3`, Llama3-70b, Claude Opus/Sonnet, Mistral, Sarvam-M, Gemma 3n, and more.
- Massive context windows (up to 200k tokens) across different models.
- Seamless chat history & knowledge management with stashes and snapshots.
- File import/export (text, Markdown, code, PDF, images via OCR).
- In-chat file inclusion from a local MCP server.
- Built-in web search integration.
- Image generation & OCR pipelines.
- Limit the scope of the context window with filter tags, e.g. `pattern=robodog.py recursive`.
- AI-driven web automation/testing via Playwright (`/play`).
- Raw MCP operations (`/mcp`).
- `/todo` feature: automate and track tasks defined in `todo.md`.
- Accessible, retro "console" UI with customizable themes and responsive design.
- Web: https://adourish.github.io/robodog/robodog/dist/
- Android: https://play.google.com/store/apps/details?id=com.unclebulgaria.robodog
- npm packages:
  - `npm install robodoglib`
  - `npm install robodogcli`
  - `npm install robodog`
- Python:
  - `pip install robodogcli`
  - `pip show -f robodogcli`
  - `python -m robodogcli.cli --help`
- Optional extras: `pip install --upgrade requests tiktoken PyYAML openai playwright pydantic langchain`
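Independent of Robodog itself, the optional `tiktoken` package is useful for estimating how much of a model's context window a file will consume before you import it. A minimal sketch (the model name and file path below are illustrative, not Robodog internals):

```python
# Estimate a file's token count before importing it into the context window.
import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")  # reasonable fallback encoding
    return len(enc.encode(text))

with open("README.md", encoding="utf-8") as f:
    print("README.md ≈", count_tokens(f.read()), "tokens")
```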
Click the ⚙️ icon in the top menu to open settings, or edit your YAML directly:
configs:
  providers:
    - provider: openAI
      baseUrl: "https://api.openai.com"
      apiKey: "<YOUR_OPENAI_KEY>"
    - provider: openRouter
      baseUrl: "https://openrouter.ai/api/v1"
      apiKey: "<YOUR_ROUTER_KEY>"
    - provider: searchAPI
      baseUrl: "https://google-search74.p.rapidapi.com"
      apiKey: "<YOUR_RAPIDAPI_KEY>"
  specialists:
    - specialist: nlp
      resume: natural language processing, content generation
    - specialist: gi
      resume: image generation from text
    - specialist: search
      resume: web search integration
  mcpServer:
    baseUrl: "http://localhost:2500"
    apiKey: "testtoken"
  models:
    - provider: openAI
      model: gpt-4
      stream: true
      specialist: nlp
      about: best for reasoning
    - provider: openAI
      model: o4-mini
      stream: true
      specialist: nlp
      about: 200k token context, advanced reasoning
    - provider: openAI
      model: dall-e-3
      stream: false
      specialist: gi
      about: image creation
    - provider: searchAPI
      model: search
      stream: false
      specialist: search
      about: web search results
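To sanity-check edits to this config outside the app, you can parse it with PyYAML (listed in the optional extras). A minimal sketch, assuming the config is saved as `robodog.yaml` (the filename here is an assumption, not a path the client requires):

```python
# Load the Robodog YAML config and list the configured models.
import yaml

with open("robodog.yaml", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

for m in cfg["configs"]["models"]:
    print(f'{m["provider"]:>10}  {m["model"]:<12} specialist={m["specialist"]}')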
- OpenAI: gpt-4, gpt-4-turbo, gpt-3.5-turbo, gpt-3.5-turbo-16k, o4-mini, o1
- OpenAI: dall-e-3 (image generation)
- LlamaAI: llama3-70b
- Anthropic: Claude Opus 4, Claude Sonnet 4
- DeepSeek R1
- Mistral Medium 3, Devstral-Small
- Sarvam-M
- Google Gemma 3n E4B
- Multi-Provider Support: switch between any configured provider or model on the fly (`/model`).
- Chat & Knowledge: separate panes for Chat History and Knowledge, both resizable.
- Stash Management:
  - `/stash <name>` – save current chat+knowledge
  - `/pop <name>` – restore a stash
  - `/list` – list all stashes
- File Import/Export:
  - `/import <glob>` – import files (.md, .js, .py, .pdf, images via OCR)
  - `/export <file>` – export a chat+knowledge snapshot
- MCP File Inclusion:
  - `/include all`
  - `/include file=README.md`
  - `/include pattern=*.js|*.css recursive`
  - `/include dir=src pattern=*.py recursive`
- Raw MCP Operations:
  - `/mcp OP [JSON]` – e.g. `/mcp LIST_FILES`, `/mcp READ_FILE {"path":"./foo.py"}`
- Web Fetch & Automation:
  - `/curl [--no-headless] <url> [<url2>|<js>]` – fetch pages or run JS
  - `/play <instructions>` – run AI-driven Playwright tests end-to-end
- Web Search: use the `search` model or click the search icon to perform live web queries.
- Image Generation & OCR: ask questions to `dall-e-3` or drop an image to extract text via OCR.
- Interactive Console UI: retro "pip-boy green" theme, responsive on desktop/mobile, accessible.
- Performance & Size Indicators: emoji feedback for processing speed and token usage.
- Extensive Command Palette: `/help` lists all commands, indicators, and settings.
- Todo Automation: use `/todo` to execute tasks defined in `todo.md` across your project roots.
/play navigate to https://example.com, extract the page title, and verify it contains 'Example Domain'
/curl https://example.com
/include pattern=*.js recursive fix bug in parser
/mcp LIST_FILES
/mcp READ_FILE {"path":"./src/cli.py"}
/model o4-mini
/import **/*.md
/export conversation_snapshot.txt
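For comparison, the `/play` example above corresponds roughly to the following hand-written Playwright script (Python sync API, available after `python -m playwright install`). This is a sketch of the equivalent manual test, not Robodog's internal implementation:

```python
# Navigate to https://example.com, extract the page title, and verify it
# contains 'Example Domain' -- the same steps the /play instruction requests.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    title = page.title()
    assert "Example Domain" in title, f"unexpected title: {title}"
    print("title check passed:", title)
    browser.close()
```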
Robodog's `/todo` command scans one or more `todo.md` files in your configured project roots, detects tasks marked `[ ]`, transitions them to `[~]` (Doing) when started, and to `[x]` (Done) when completed. Each task may include:
- `include:` – a pattern or file specification to gather relevant knowledge
- `focus:` – the file path where the AI will write or update content
- Optional code fences below the task as initial context

You can have multiple `todo.md` files anywhere under your roots. `/todo` processes the earliest outstanding task, runs the AI with the gathered knowledge, updates the focus file, stamps start/completion times, and advances to the next task.
# file: project1/todo.md
- [ ] Revise API client
  - include: pattern=api/*.js recursive
  - focus: file=api/client.js
```knowledge
// existing stub
```
- [ ] Add unit tests
  - include: file=tests/template.spec.js
  - focus: file=tests/api.client.spec.js
# file: project2/docs/todo.md
- [ ] Update README
  - focus: file=README.md
- [ ] Generate changelog
  - include: pattern=CHANGELOG*.md
  - focus: file=CHANGELOG.md
# todo readme
- [x] readme
  - include: pattern=*robodog*.md|*robodog*.py|*todo.md recursive
  - focus: file=c:\projects\robodog\robodogcli\temp\service.log
```knowledge
1. do not remove any content
2. add a new readme section for the /todo feature with examples of the todo.md files and how you can have as many as possible
3. give lots of examples of file formats
```
# watch
- [ ] change app prints in service to logger.INFO
  - include: pattern=*robodog*.md|*robodog*.py recursive
  - focus: file=c:\projects\robodog\robodogcli\robodog\service.py
```knowledge
do not remove any features.
give me the full drop-in code file
```
# fix logging
- [ ] ask: fix logging. change logging so that it gets the log level through the command line, and change the logger so that it takes the log level from the command-line param
  - include: pattern=*robodog*.md|*robodog*.py recursive
  - focus: file=c:\projects\robodog\robodogcli\robodog\cli3.py
```knowledge
my knowledge
```
You can chain as many tasks and files as needed. Each can reside in a different directory, and Robodog will locate all `todo.md` files automatically.
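For intuition about the `[ ]` → `[~]` → `[x]` flow described above, here is a minimal, illustrative sketch of a line-based scan over a `todo.md` file. The function names and regex are hypothetical and not Robodog's actual implementation:

```python
# Illustrative only: find the first open task in a todo.md and flip its
# checkbox state, mirroring the [ ] -> [~] -> [x] transitions described above.
import re
from pathlib import Path

TASK = re.compile(r"^- \[( |~|x)\] (.*)$")

def next_open_task(path: Path):
    """Return (line_number, text) of the first '- [ ]' task, or None."""
    for i, line in enumerate(path.read_text(encoding="utf-8").splitlines()):
        m = TASK.match(line)
        if m and m.group(1) == " ":
            return i, m.group(2)
    return None

def set_state(path: Path, line_no: int, state: str):
    """Rewrite one task line to '[~]' (Doing) or '[x]' (Done)."""
    lines = path.read_text(encoding="utf-8").splitlines()
    lines[line_no] = re.sub(r"\[( |~|x)\]", f"[{state}]", lines[line_no], count=1)
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
```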
See the command palette in-app (`/help`) or the reference below:
/help – show help
/models – list configured models
/model <name> – switch model
/key <prov> <key> – set API key
/import <glob> – import files into knowledge
/export <file> – export snapshot
/clear – clear chat & knowledge
/stash <name> – stash state
/pop <name> – restore stash
/list – list stashes
/temperature <n> – set temperature
/top_p <n> – set top_p
/max_tokens <n> – set max_tokens
/frequency_penalty <n> – set frequency_penalty
/presence_penalty <n> – set presence_penalty
/stream – enable streaming mode
/rest – disable streaming mode
/folders <dirs> – set MCP roots
/include … – include files via MCP
/curl … – fetch pages / run JS
/play … – AI-driven Playwright tests
/mcp … – invoke raw MCP operation
/todo – run next To Do task
# Clone or unzip robodog
cd robodog
python build.py
open ./dist/robodog.html
npm install robodoglib
npm install robodogcli
npm install robodog
pip install robodogcli
pip show -f robodogcli
python -m robodogcli.cli --help
python -m playwright install
Enjoy Robodog AI – the future of fast, contextual, and extensible AI interaction!