hpoglue

An HPO tool with a modular API that makes it easy to interface a new Optimizer and a new Benchmark

Minimal Example to run hpoglue

from hpoglue.run_glue import run_glue

df = run_glue(
    run_name="hpoglue_ex",
    optimizer=...,    # an Optimizer implementation (see the example below)
    benchmark=...,    # a Benchmark definition (see the example below)
    seed=1,
    budget=50,
)

Tip

  • See below for examples of an Optimizer and Benchmark
  • Check this example notebook for more
  • Check out hposuite for some already implemented Optimizers and Benchmarks for hpoglue

Installation

Create a virtual environment using venv

python -m venv hpoglue_env
source hpoglue_env/bin/activate

Installing from PyPI

pip install hpoglue

Tip

  • pip install "hpoglue[notebook]" - For usage in a notebook

Installation from source

git clone https://github.com/automl/hpoglue.git
cd hpoglue

pip install -e . # -e for editable install

Example Optimizer Definition

from pathlib import Path

from hpoglue import Config, Optimizer, Problem, Query, Result


class RandomSearch(Optimizer):
    name = "RandomSearch"
    support = Problem.Support()
    def __init__(
        self,
        problem: Problem,
        working_directory: str | Path,
        seed: int | None = None,
    ):
        """
        Args:
            problem: Source of task information.
            working_directory: TODO
            seed: TODO
        """
        self.config_space = problem.config_space
        self.config_space.seed(seed)
        self.problem = problem
        self._optimizer_unique_id = 0

    def ask(self) -> Query:
        # Sample the next configuration uniformly at random from the search space.
        self._optimizer_unique_id += 1
        config = Config(
            config_id=str(self._optimizer_unique_id),
            values=dict(self.config_space.sample_configuration()),
        )
        return Query(config=config, fidelity=None)

    def tell(self, result: Result) -> None:
        # Update the optimizer (not needed for RandomSearch)
        return

Example Benchmark Definition

import numpy as np
from ConfigSpace import ConfigurationSpace
from hpoglue import FunctionalBenchmark, Measure, Result, Query


def ackley_fn(x1: float, x2: float) -> float:
    # Two-dimensional Ackley function; global minimum of 0 at the origin.
    x = np.array([x1, x2])
    n_var = len(x)
    a = 20
    b = 1 / 5
    c = 2 * np.pi
    part1 = -1.0 * a * np.exp(-1.0 * b * np.sqrt((1.0 / n_var) * np.sum(x * x)))
    part2 = -1.0 * np.exp((1.0 / n_var) * np.sum(np.cos(c * x)))
    return part1 + part2 + a + np.exp(1)

def wrapped_ackley(query: Query) -> Result:
    y = ackley_fn(x1=query.config.values["x1"], x2=query.config.values["x2"])
    return Result(query=query, fidelity=None, values={"y": y})

ACKLEY_BENCH = FunctionalBenchmark(
    name="ackley",
    config_space=ConfigurationSpace({"x1": (-32.768, 32.768), "x2": (-32.768, 32.768)}),
    metrics={"y": Measure.metric((0.0, np.inf), minimize=True)},
    query=wrapped_ackley,
)
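
As a quick sanity check (plain Python, not part of the hpoglue API), the Ackley function defined above has its global minimum of 0 at the origin, which is the value the optimizer should drive the objective towards:

assert abs(ackley_fn(0.0, 0.0)) < 1e-12  # global optimum at (x1, x2) = (0, 0)
print(ackley_fn(3.0, -2.0))              # any other point yields a strictly larger value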

Run hpoglue on the examples

from hpoglue.run_glue import run_glue

df = run_glue(
    run_name="hpoglue_demo",
    optimizer=RandomSearch,
    benchmark=ACKLEY_BENCH,
    seed=1,
    budget=50,
)
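
run_glue returns the collected evaluations; assuming df is a pandas DataFrame (as the name suggests), it can be inspected with the usual pandas tools:

print(df.columns)  # fields recorded for each evaluation
print(df.head())   # first few sampled configurations and their objective values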
