Weco systematically optimizes your code, guided directly by your evaluation metrics.
Example applications include:
- GPU Kernel Optimization: Reimplement PyTorch functions using CUDA or Triton, optimizing for `latency`, `throughput`, or `memory_bandwidth`.
- Model Development: Tune feature transformations, architectures, or the whole training pipeline, optimizing for `validation_accuracy`, `AUC`, or `Sharpe Ratio`.
- Prompt Engineering: Refine prompts for LLMs (e.g., for math problems), optimizing for `win_rate`, `relevance`, or `format_adherence`.

The `weco` CLI leverages a tree search approach guided by LLMs to iteratively explore and refine your code. It automatically applies changes, runs your evaluation script, parses the results, and proposes further improvements based on the specified goal.
- Install the Package: `pip install weco`
- Authenticate (Required): `weco` now uses a credit-based billing system with centralized LLM access. You need to authenticate to use the service:
  - Run the CLI: `weco` will prompt you to authenticate via your web browser
  - Free Credits: New users receive free credits upon signup
  - Centralized Keys: All LLM provider API keys are managed by Weco (no BYOK required)
  - Credit Top-ups: Purchase additional credits through the dashboard at dashboard.weco.ai
- Run the CLI: The easiest way to get started with Weco is to use the interactive copilot. Simply navigate to your project directory and run:
weco
Or specify a project path:
weco /path/to/your/project
This launches Weco's interactive copilot that will:
- Analyze your codebase using AI to understand your project structure and identify optimization opportunities
- Suggest specific optimizations tailored to your code (e.g., GPU kernel optimization, model improvements, prompt engineering)
- Generate evaluation scripts automatically or help you configure existing ones
- Set up the complete optimization pipeline with appropriate metrics and commands
- Run the optimization or provide you with the exact command to execute
`weco` directly modifies the file specified by `--source` during the optimization process. It is strongly recommended to use version control (like Git) to track changes and revert if needed. Alternatively, ensure you have a backup of your original file before running the command. Upon completion, the file will contain the best-performing version of the code found during the run.
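For example, a minimal Git safety net before a run (assuming the file being optimized is `optimize.py`, as in the example below) might be:

```bash
# Snapshot the file Weco is about to modify.
git add optimize.py
git commit -m "Snapshot before Weco optimization"

# ...run weco...

# If needed, discard Weco's edits and restore the snapshot:
git checkout -- optimize.py
```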
Configure optimization parameters yourself: if you need precise control over the optimization parameters, you can use the `weco run` command directly:
Example: Optimizing Simple PyTorch Operations
# Navigate to the example directory
cd examples/hello-kernel-world
# Install dependencies
pip install torch
# Run Weco with manual configuration
weco run --source optimize.py \
--eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
--metric speedup \
--goal maximize \
--steps 15 \
--additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
Note: If you have an NVIDIA GPU, change the device in the `--eval-command` to `cuda`. If you are running this on Apple Silicon, set it to `mps`.
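For instance, on a CUDA-capable machine the example above becomes (all other flags unchanged):

```bash
weco run --source optimize.py \
    --eval-command "python evaluate.py --solution-path optimize.py --device cuda" \
    --metric speedup \
    --goal maximize \
    --steps 15 \
    --additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
```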
For more advanced examples, including Triton, CUDA kernel optimization, ML model optimization, and prompt engineering for math problems, please see the `README.md` files within the corresponding subdirectories under the `examples/` folder.
Note: We recommend removing any backticks from your code if any are present. We currently don't support backticks, but will support them in the future.
Required:
| Argument | Description | Example |
|---|---|---|
| `-s, --source` | Path to the source code file that will be optimized. | `-s model.py` |
| `-c, --eval-command` | Command to run for evaluating the code in `--source`. This command should print the target `--metric` and its value to the terminal (stdout/stderr). See note below. | `-c "python eval.py"` |
| `-m, --metric` | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). This metric name does not need to match what's printed by your `--eval-command` exactly (e.g., it's okay to use "speedup" instead of "Speedup:"). | `-m speedup` |
| `-g, --goal` | `maximize`/`max` to maximize the `--metric` or `minimize`/`min` to minimize it. | `-g maximize` |
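Putting the required arguments together (using the placeholder file names from the table), a minimal invocation looks like:

```bash
# Minimal run using only the required flags; model.py and eval.py are
# placeholder names taken from the table above.
weco run \
  -s model.py \
  -c "python eval.py" \
  -m accuracy \
  -g maximize
```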
Optional:
| Argument | Description | Default | Example |
|---|---|---|---|
| `-n, --steps` | Number of optimization steps (LLM iterations) to run. | 100 | `-n 50` |
| `-M, --model` | Model identifier for the LLM to use (e.g., `o4-mini`, `claude-sonnet-4-0`). | `o4-mini` | `-M o4-mini` |
| `-i, --additional-instructions` | Natural language description of specific instructions, or path to a file containing detailed instructions to guide the LLM. Supported file formats include `.txt`, `.md`, and `.rst`. | None | `-i instructions.md` or `-i "Optimize the model for faster inference"` |
| `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` | `-l ./logs/` |
| `--eval-timeout` | Timeout in seconds for each evaluation step. | No timeout (unlimited) | `--eval-timeout 3600` |
| `--save-logs` | Save execution output from each optimization step to disk. Creates timestamped directories with raw output files and a JSONL index for tracking execution history. | False | `--save-logs` |
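As an illustration only (placeholder file names; adjust values to your project), a run that layers on the optional flags could look like:

```bash
# Illustrative sketch combining the optional flags from the table above.
weco run \
  -s model.py \
  -c "python eval.py" \
  -m accuracy \
  -g maximize \
  --steps 50 \
  --model o4-mini \
  --additional-instructions "Optimize the model for faster inference" \
  --log-dir ./logs/ \
  --eval-timeout 3600 \
  --save-logs
```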
The CLI requires a Weco account for authentication and billing.
Weco now requires authentication for all operations. This enables our credit-based billing system and provides access to powerful optimizations:
- During onboarding: When you run `weco` for the first time, you'll be prompted to log in
- Manual login: Use `weco logout` to clear credentials, then run `weco` again to re-authenticate
- Device flow: Weco will open your browser automatically and guide you through a secure OAuth-style authentication
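For example, to switch accounts or re-authenticate from the terminal:

```bash
weco logout   # clear the stored credentials
weco          # re-run; your browser opens for authentication
```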
Benefits:
- No API Key Management: All LLM provider keys are managed centrally
- Cost Transparency: See exactly how many credits each optimization consumes
- Free Trial: Free credits to get started with optimization projects
- Run History: View all your optimization runs on the Weco dashboard
- Progress Tracking: Monitor long-running optimizations remotely
- Budget Control: Set spending limits and auto top-up preferences
| Command | Description | When to Use |
|---|---|---|
| `weco` | Launch interactive onboarding | Recommended for beginners - Analyzes your codebase and guides you through setup |
| `weco /path/to/project` | Launch onboarding for specific project | When working with a project in a different directory |
| `weco run [options]` | Direct optimization execution | For advanced users - When you know exactly what to optimize and how |
| `weco resume <run-id>` | Resume an interrupted run | Continue from the last completed step |
| `weco logout` | Clear authentication credentials | To switch accounts or troubleshoot authentication issues |
You can specify which LLM model to use with the `-M` or `--model` flag:
# Use with onboarding
weco --model gpt-4o
# Use with direct execution
weco run --model claude-sonnet-4-0 --source optimize.py [other options...]
Available models:
- `o4-mini`, `o3-mini`, `gpt-4o` (OpenAI models)
- `claude-sonnet-4-0`, `claude-opus-4-0` (Anthropic models)
- `gemini-2.5-pro`, `gemini-2.5-flash` (Google models)
All models are available through Weco's centralized system. If no model is specified, Weco automatically selects the best model for your optimization task.
If your optimization run is interrupted (network issues, restart, etc.), resume from the most recent node:
# Resume an interrupted run
weco resume 0002e071-1b67-411f-a514-36947f0c4b31
Arguments for `weco resume`:

| Argument | Description | Example |
|---|---|---|
| `run-id` | The UUID of the run to resume (shown at the start of each run) | `0002e071-1b67-411f-a514-36947f0c4b31` |
Notes:
- Works only for interrupted runs (status: `error`, `terminated`, etc.).
- You'll be prompted to confirm that your evaluation environment (source file + evaluation command) hasn't changed.
- The source file is restored to the most recent solution before continuing.
- All progress and metrics from the original run are preserved.
- Log directory, save-logs behavior, and evaluation timeout are reused from the original run.
Weco, powered by the AIDE algorithm, optimizes code iteratively based on your evaluation results. Achieving significant improvements, especially on complex research-level tasks, often requires substantial exploration time.
The following plot from the independent Research Engineering Benchmark (RE-Bench) report shows the performance of AIDE (the algorithm behind Weco) on challenging ML research engineering tasks over different time budgets.
As shown, AIDE demonstrates strong performance gains over time, surpassing lower human expert percentiles within hours and continuing to improve. This highlights the potential of evaluation-driven optimization, but also indicates that reaching high levels of performance comparable to human experts on difficult benchmarks can take considerable time (tens of hours in this specific benchmark, corresponding to many `--steps` in the Weco CLI). Factor this into your planning when setting the number of `--steps` for your optimization runs.
When using the `--save-logs` flag, Weco saves the execution output from each optimization step to help with debugging and analysis. The logs are organized as follows:
.runs/
└── <source-file-name>/
└── <run-uuid>/
├── exec_output.jsonl # Index file with metadata for each step
├── outputs/
│ ├── step_0.out.txt # Raw output from initial evaluation
│ ├── step_1.out.txt # Raw output from step 1
│ ├── step_2.out.txt # Raw output from step 2
│ └── ...
├── step_0.py # Code snapshot from initial evaluation
├── step_1.py # Code snapshot from step 1
├── step_2.py # Code snapshot from step 2
└── ...
Each run is organized under the source file name (e.g., `spaceship-titanic` for `spaceship-titanic.py`) and a unique UUID. The `outputs/` directory and `exec_output.jsonl` file are only created when the `--save-logs` flag is used.
The `exec_output.jsonl` file contains one JSON object per line with:

- `step`: The optimization step number
- `timestamp`: When the execution occurred
- `output_file`: Relative path to the full output file
- `output_length`: Total length of the output
This is particularly useful for:
- Debugging why certain optimizations fail
- Analyzing patterns in evaluation results
- Keeping records of long-running optimization sessions
- Troubleshooting evaluation script issues
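As a minimal sketch for inspecting these logs (the run path below is a hypothetical example; the field names come from the list above), you could read the index like this:

```python
import json
from pathlib import Path

# Hypothetical run directory following the default .runs/<source-name>/<run-uuid>/ layout.
run_dir = Path(".runs/optimize/0002e071-1b67-411f-a514-36947f0c4b31")

with open(run_dir / "exec_output.jsonl") as f:
    for line in f:
        record = json.loads(line)
        # Each record describes one optimization step.
        print(f"step {record['step']} at {record['timestamp']}: "
              f"{record['output_length']} chars of output")
        # The raw evaluation output for that step is stored under outputs/.
        step_output = (run_dir / record["output_file"]).read_text()
```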
The command specified by `--eval-command` is crucial. It's responsible for executing the potentially modified code from `--source` and assessing its performance. This command MUST print the metric you specified with `--metric` along with its numerical value to the terminal (standard output or standard error). Weco reads this output to understand how well each code version performs and to guide the optimization process.

For example, if you set `--metric speedup`, your evaluation script (`eval.py` in the examples) should output a line like:
speedup: 1.5
or
Final speedup value = 1.5
Weco will parse this output to extract the numerical value (1.5 in this case) associated with the metric name ('speedup').
Note on Output Truncation: When evaluation output exceeds 51,000 characters, Weco truncates it to show the first 25,000 and last 25,000 characters. For best results, ensure your evaluation script prints the metric value near the end of its output.
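To make this concrete, here is a minimal sketch of what such an evaluation script could look like. The file name, the `Model` class imported from `optimize.py`, and the baseline function are illustrative assumptions, not the actual `eval.py` shipped with the examples:

```python
# minimal_eval.py - illustrative sketch, not the eval.py from the examples.
import time
import torch

from optimize import Model  # hypothetical: the class Weco is editing in optimize.py


def benchmark(fn, x, iters=50):
    """Return average seconds per call for fn(x)."""
    for _ in range(5):          # warm-up to avoid measuring one-time setup costs
        fn(x)
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - start) / iters


def baseline(x):
    # Hypothetical reference implementation to compare against.
    return torch.relu(x) + torch.tanh(x)


if __name__ == "__main__":
    x = torch.randn(1024, 1024)
    model = Model()

    baseline_time = benchmark(baseline, x)
    candidate_time = benchmark(model, x)

    # Weco parses this line: the metric name followed by its numerical value,
    # printed near the end of the output.
    print(f"speedup: {baseline_time / candidate_time:.3f}")
```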
A list of models we support can be found in our documentation here.
We welcome contributions! Please see contributing.md for detailed guidelines on how to contribute to this project.