# Multi-Fidelity Optimization Benchmark APIs
This repository provides APIs for the MLP benchmark in HPOBench, HPOlib, JAHS-Bench-201, and LCBench in YAHPOGym. Additionally, we provide multi-fidelity versions of the Hartmann and Branin functions.
## Install
Since JAHS-Bench-201 and LCBench are surrogate benchmarks whose dependencies may conflict with those of your project, we provide separate installation options:
```shell
# Minimal install
$ pip install mfhpo-benchmark-api

# Minimal install + LCBench
$ pip install mfhpo-benchmark-api[lcbench]

# Minimal install + JAHS-Bench-201
$ pip install mfhpo-benchmark-api[jahs]

# Full install (Minimal + LCBench + JAHS-Bench-201)
$ pip install mfhpo-benchmark-api[full]
```
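Note that some shells (e.g., zsh) interpret square brackets, so you may need to quote the extras:

```shell
$ pip install "mfhpo-benchmark-api[full]"
```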
Note that each benchmark requires downloading its tabular or surrogate data. For HPOBench and HPOlib, please follow the README of this repository. In those instructions, `<YOUR_DATA_PATH>` should be replaced with `~/hpo_benchmarks/hpolib` for HPOlib and `~/hpo_benchmarks/hpobench` for HPOBench.
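For instance, the expected directories can be created up front (the data files to place inside them come from the steps in that README):

```shell
$ mkdir -p ~/hpo_benchmarks/hpolib ~/hpo_benchmarks/hpobench
```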
**NOTE**: Setting the following environment variable gives some flexibility to the data path:

```shell
# In this example, the datasets are looked up in /root/path/to/datasets/hpo_benchmarks/.
$ export BENCHMARK_ROOT_PATH=/root/path/to/datasets/
```
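As a rough illustration of what this variable changes, here is a minimal sketch of the implied path resolution, assuming a fallback to the home directory when the variable is unset (the actual resolution logic lives in the package and may differ):

```python
import os

# Hypothetical sketch: with BENCHMARK_ROOT_PATH unset, this yields
# ~/hpo_benchmarks/hpolib; with the export above, it yields
# /root/path/to/datasets/hpo_benchmarks/hpolib.
root = os.environ.get("BENCHMARK_ROOT_PATH", os.path.expanduser("~"))
data_dir = os.path.join(root, "hpo_benchmarks", "hpolib")
print(data_dir)
```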
For JAHS-Bench-201, run the following commands:

```shell
$ mkdir -p ~/hpo_benchmarks/jahs
$ cd ~/hpo_benchmarks/jahs
$ wget https://ml.informatik.uni-freiburg.de/research-artifacts/jahs_bench_201/v1.1.0/assembled_surrogates.tar
# Uncompress assembled_surrogates.tar:
$ tar -xf assembled_surrogates.tar
```
For LCBench, access the website and download `lcbench.zip`. Then unzip `lcbench.zip` in `~/hpo_benchmarks/lcbench`.
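For example, assuming `lcbench.zip` was downloaded to the current directory:

```shell
$ mkdir -p ~/hpo_benchmarks/lcbench
$ unzip lcbench.zip -d ~/hpo_benchmarks/lcbench
```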
## Examples
Examples are available in the `examples` directory. For instance, you can run a quick test with the following code:
```python
from benchmark_apis import MFBranin

bench = MFBranin()
for i in range(10):
    # Sample a random configuration from the benchmark's search space.
    config = bench.config_space.sample_configuration().get_dictionary()
    # Evaluate it at the highest fidelity (z0=100).
    output = bench(eval_config=config, fidels={"z0": 100})
    print(output)
```
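Building on the same call signature, here is a slightly larger sketch of a random search; the `loss` output key is an assumption for illustration, so inspect the returned dictionary to confirm:

```python
from benchmark_apis import MFBranin

bench = MFBranin()
best_loss, best_config = float("inf"), None

# Simple random search at full fidelity (z0=100, as in the example above).
# Lower z0 values would select cheaper, lower-fidelity evaluations; check
# the benchmark's documentation for the valid fidelity range.
for _ in range(30):
    config = bench.config_space.sample_configuration().get_dictionary()
    output = bench(eval_config=config, fidels={"z0": 100})
    loss = output["loss"]  # assumed output key; print(output) to confirm
    if loss < best_loss:
        best_loss, best_config = loss, config

print(best_loss, best_config)
```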