hyperactive

An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.


Keywords
visualization, data-science, automated-machine-learning, bayesian-optimization, deep-learning, feature-engineering, hyperactive, hyperparameter-optimization, keras, machine-learning, model-selection, neural-architecture-search, optimization, parallel-computing, parameter-tuning, python, pytorch, scikit-learn, xgboost
License
Other
Install
pip install hyperactive==0.4.1.7

Documentation

Welcome to hyperactive

A unified interface for optimization algorithms and problems.

Hyperactive implements a collection of optimization algorithms, accessible through a unified experiment-based interface that separates optimization problems from algorithms. The library provides native implementations of algorithms from the Gradient-Free-Optimizers package alongside direct interfaces to Optuna and scikit-learn optimizers, supporting discrete, continuous, and mixed parameter spaces.




Installation

pip install hyperactive

⚡ Quickstart

Maximizing a custom function

import numpy as np

# function to be maximized
def problem(params):
    x = params["x"]
    y = params["y"]

    return -(x**2 + y**2)

# discrete search space: dict of iterables, scikit-learn-like grid space
# (valid search space types depend on the optimizer)
search_space = {
    "x": np.arange(-1, 1, 0.01),
    "y": np.arange(-1, 2, 0.1),
}

from hyperactive.opt.gfo import HillClimbing

hillclimbing = HillClimbing(
    search_space=search_space,
    n_iter=100,
    experiment=problem,
)

# running the hill climbing search:
best_params = hillclimbing.solve()
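
The objective above is maximized at x = y = 0, so after enough iterations the returned dictionary should hold grid values near the origin:

print(best_params)
print(problem(best_params))  # should be close to 0.0, the global maximum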

experiment abstraction - example: scikit-learn CV experiment

"experiment" abstraction = parametrized optimization problem

hyperactive provides a number of common experiments, e.g., scikit-learn cross-validation experiments:

import numpy as np
from hyperactive.experiment.integrations import SklearnCvExperiment
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)

# create experiment
sklearn_exp = SklearnCvExperiment(
    estimator=SVC(),
    scoring=accuracy_score,
    cv=KFold(n_splits=3, shuffle=True),
    X=X,
    y=y,
)

# experiments can be evaluated via "score"
params = {"C": 1.0, "kernel": "linear"}
score, add_info = sklearn_exp.score(params)

# they can be used in optimizers like above
from hyperactive.opt.gfo import HillClimbing

search_space = {
    "C": np.logspace(-2, 2, num=10),
    "kernel": ["linear", "rbf"],
}

hillclimbing = HillClimbing(
    search_space=search_space,
    n_iter=100,
    experiment=sklearn_exp,
)

best_params = hillclimbing.solve()

full ML toolbox integration - example: scikit-learn

Any hyperactive optimizer can be combined with the ML toolbox integrations!

OptCV for tuning scikit-learn estimators with any hyperactive optimizer:

# 1. defining the tuned estimator:
from sklearn.svm import SVC
from hyperactive.integrations.sklearn import OptCV
from hyperactive.opt.gfo import HillClimbing

search_space = {"kernel": ["linear", "rbf"], "C": [1, 10]}
optimizer = HillClimbing(search_space=search_space, n_iter=20)
tuned_svc = OptCV(SVC(), optimizer)

# 2. fitting the tuned estimator:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

tuned_svc.fit(X_train, y_train)

y_pred = tuned_svc.predict(X_test)

# 3. obtaining best parameters and best estimator
best_params = tuned_svc.best_params_
best_estimator = tuned_svc.best_estimator_
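
As a quick sanity check, the predictions of the tuned estimator can be scored with standard scikit-learn metrics (nothing Hyperactive-specific here):

from sklearn.metrics import accuracy_score

print("test accuracy:", accuracy_score(y_test, y_pred))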

💡 Key Concepts

Experiment-Based Architecture

Hyperactive v5 introduces a clean separation between optimization algorithms and optimization problems through the experiment abstraction:

  • Experiments define what to optimize (the objective function and evaluation logic)
  • Optimizers define how to optimize (the search strategy and algorithm)

This design allows you to:

  • Mix and match any optimizer with any experiment type (see the sketch below)
  • Create reusable experiment definitions for common ML tasks
  • Easily switch between different optimization strategies
  • Build complex optimization workflows with consistent interfaces

Built-in experiments include:

  • SklearnCvExperiment - Cross-validation for sklearn estimators
  • SktimeForecastingExperiment - Time series forecasting optimization
  • Custom function experiments (pass any callable as experiment)
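
Because experiments and optimizers are decoupled, the same objective can be reused across differently configured optimizers. A minimal sketch using only the HillClimbing optimizer shown above (any other optimizer class plugs in the same way):

import numpy as np
from hyperactive.opt.gfo import HillClimbing

def objective(params):
    return -(params["x"] ** 2)

search_space = {"x": np.arange(-5, 5, 0.1)}

# the same experiment, searched with two different budgets
quick = HillClimbing(search_space=search_space, n_iter=20, experiment=objective)
thorough = HillClimbing(search_space=search_space, n_iter=500, experiment=objective)

best_quick = quick.solve()
best_thorough = thorough.solve()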

Overview

Hyperactive features a collection of optimization algorithms that can be used for a variety of optimization problems. An overview of its capabilities:

  • Optimization techniques: local search, global search, population methods, sequential methods, and an Optuna backend
  • Tested and supported packages: machine-learning libraries such as scikit-learn, XGBoost, Keras, and PyTorch
  • Optimization applications: machine-learning tasks such as hyperparameter optimization, model selection, and neural architecture search

The examples above are not necessarily based on realistic datasets or training procedures. Their purpose is to run quickly and to give the user ideas for interesting use cases.


Sideprojects and Tools

The following packages are designed to support Hyperactive and expand its use cases.

  • Search-Data-Collector: a simple tool to save search data to CSV files during or after an optimization run
  • Search-Data-Explorer: visualize search data with plotly inside a streamlit dashboard

FAQ

Known Errors + Solutions

Read this before opening a bug issue
  • Are you sure the bug is located in Hyperactive?

    The error might be located in the optimization backend. Look at the error message in the command line. If one of the last messages looks like this:

    • File "/.../gradient_free_optimizers/...", line ...

    then you should post the bug report in the Gradient-Free-Optimizers repository.

    Otherwise, you can post the bug report in Hyperactive.

  • Do you have the correct Hyperactive version?

    The API of Hyperactive changes with every major version update (e.g. v2.2 -> v3.0). Check which version of Hyperactive you have (see the version-check snippet after this list). If your major version is older, you have two options:

    Recommended: You could just update your Hyperactive version with:

    pip install hyperactive --upgrade

    This way you can use all the new documentation and examples from the current repository.

    Or you could continue using the old version and use an old repository branch as documentation. You can do that by selecting the corresponding branch (top right of the repository; the default is "main"). For example, if your major version is v2.1.0, select the 2.x.x branch to get the repository state for that version.

  • Provide example code for error reproduction

    To understand and fix the issue, I need example code that reproduces the error. I must be able to copy the code into a .py file and execute it directly to reproduce the error.
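
To check the installed version without relying on the package exposing a __version__ attribute, the standard library works for any installed distribution:

from importlib.metadata import version

print(version("hyperactive"))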

MemoryError: Unable to allocate ... for an array with shape (...)

This is expected behavior of the current implementation of sequential model-based (SMB) optimizers. For all sequential model-based algorithms, keep an eye on the size of the search space:

# the number of grid points is the product of the sizes of all dimensions
search_space_size = 1
for value_ in search_space.values():
    search_space_size *= len(value_)

print("search_space_size", search_space_size)

Reduce the search space size to resolve this error.
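
One way to shrink the space is to coarsen each dimension, e.g. by fixing the number of grid points with np.linspace instead of using a fine step size. A sketch:

import numpy as np

# fine grids: 10,000 x 10,000 = 10^8 candidate points
# search_space = {"x": np.arange(-50, 50, 0.01), "y": np.arange(-50, 50, 0.01)}

# coarser grids: 100 x 100 = 10^4 candidate points
search_space = {
    "x": np.linspace(-50, 50, num=100),
    "y": np.linspace(-50, 50, num=100),
}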

TypeError: cannot pickle '_thread.RLock' object

This typically means your search space or parameter suggestions include non-serializable objects (e.g., classes, bound methods, lambdas, local functions, locks). Ensure that all values in search_space/param_space are plain Python/scientific types such as ints, floats, strings, lists/tuples, or numpy arrays. Avoid closures and non-top-level callables in parameter values.
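
For illustration, a search space that can trigger this error and a picklable rewrite; mapping string keys to objects inside the objective is a common workaround (the names here are hypothetical):

# problematic: lambdas are not picklable by the default pickle module
bad_space = {"activation": [lambda x: max(x, 0.0), lambda x: x]}

# picklable: plain strings in the search space ...
good_space = {"activation": ["relu", "identity"]}

# ... resolved to the actual objects inside the objective function
def objective(params):
    activations = {"relu": lambda x: max(x, 0.0), "identity": lambda x: x}
    fn = activations[params["activation"]]
    return fn(1.0)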

Hyperactive v5 does not expose a global “distribution” switch. If you parallelize outside Hyperactive (e.g., with joblib/dask/ray), choose an appropriate backend and make sure the objective and arguments are picklable for process-based backends.
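
A minimal sketch of such outside-Hyperactive parallelism with joblib, assuming a picklable top-level objective (process-based backends pickle the work they ship to workers):

import numpy as np
from joblib import Parallel, delayed
from hyperactive.opt.gfo import HillClimbing

def objective(params):
    return -(params["x"] ** 2)

search_space = {"x": np.arange(-5, 5, 0.1)}

def run_search(n_iter):
    opt = HillClimbing(search_space=search_space, n_iter=n_iter, experiment=objective)
    return opt.solve()

# four independent searches in parallel
results = Parallel(n_jobs=4)(delayed(run_search)(100) for _ in range(4))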

Command line full of warnings

These are very often warnings from sklearn or numpy. They do not indicate bad performance from Hyperactive, and your code will most likely run fine. Such warnings are, however, difficult to silence.

It should help to put this at the very top of your script:

def warn(*args, **kwargs):
    pass

import warnings

# replace warnings.warn with a no-op so all warnings are silenced
warnings.warn = warn
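
A more targeted alternative is the standard warnings filter, which silences warnings by category instead of disabling them wholesale:

import warnings

warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore", category=UserWarning)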

Warning: Not enough initial positions for population size

This warning occurs because the optimizer needs more initial positions to generate a population for the search. In v5, initial positions are controlled via the optimizer’s initialize parameter.

# This is how it looks by default
initialize = {"grid": 4, "random": 2, "vertices": 4}

# You could set it to this for a maximum population of 20
initialize = {"grid": 4, "random": 12, "vertices": 4}
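
Based on the statement above, this dictionary is passed to the optimizer's initialize parameter at construction time; a sketch with the HillClimbing optimizer and the problem/search_space from the quickstart:

hillclimbing = HillClimbing(
    search_space=search_space,
    n_iter=100,
    initialize={"grid": 4, "random": 12, "vertices": 4},
    experiment=problem,
)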

References


Citing Hyperactive

@Misc{hyperactive2021,
  author =   {{Simon Blanke}},
  title =    {{Hyperactive}: An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.},
  howpublished = {\url{https://github.com/SimonBlanke}},
  year = {since 2019}
}

License

Hyperactive is distributed under the MIT license. See the LICENSE file in the repository.