Releases: SimonBlanke/Hyperactive
v4.8.0
v4.7.0
- add Genetic algorithm optimizer
- add Differential evolution optimizer
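The release notes don't include a usage example for the new optimizers. As a rough illustration of what a differential evolution optimizer does internally, here is a minimal pure-Python sketch of the classic DE/rand/1/bin loop (all names are illustrative, not Hyperactive's API; Hyperactive maximizes scores, so this sketch does too):

```python
import random

def differential_evolution(objective, bounds, pop_size=20, mutation=0.8,
                           crossover=0.7, generations=50, seed=0):
    """Minimal DE/rand/1/bin loop that maximizes `objective` over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # donor vector built from three distinct other population members
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            donor = [pop[a][d] + mutation * (pop[b][d] - pop[c][d])
                     for d in range(dim)]
            # binomial crossover with the parent, clipped to the bounds
            trial = [min(max(donor[d] if rng.random() < crossover else pop[i][d],
                             bounds[d][0]), bounds[d][1])
                     for d in range(dim)]
            trial_score = objective(trial)
            if trial_score >= scores[i]:  # greedy selection (maximize)
                pop[i], scores[i] = trial, trial_score
    best = max(range(pop_size), key=lambda j: scores[j])
    return pop[best], scores[best]
```

On a smooth 1-D objective this converges close to the optimum within a few dozen generations.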
v4.6.0
- add support for constrained optimization
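The notes don't show how constraints are applied. A common pattern for constrained search over discrete spaces is to reject infeasible candidates before evaluating the objective; a minimal pure-Python sketch (illustrative names, not Hyperactive's API — constraints here are predicates over a candidate dict):

```python
import random

def constrained_random_search(objective, search_space, constraints,
                              n_iter=100, seed=0):
    """Random search that only evaluates candidates satisfying every constraint.

    search_space maps parameter names to lists of candidate values;
    constraints is a list of boolean functions over the candidate dict.
    """
    rng = random.Random(seed)
    best_para, best_score = None, float("-inf")
    for _ in range(n_iter):
        para = {name: rng.choice(values) for name, values in search_space.items()}
        if not all(check(para) for check in constraints):
            continue  # infeasible candidate: skip without calling the objective
        score = objective(para)
        if score > best_score:
            best_para, best_score = para, score
    return best_para, best_score
```

Rejecting before evaluation keeps expensive objective calls restricted to the feasible region.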
v4.5.0
- add early stopping feature to custom optimization strategies
- display additional outputs from the objective function in the command-line results
- add type hints to hyperactive-api
- add tests for new features
- add test for verbosity=False
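The early-stopping feature mentioned above is typically a patience criterion: stop the run once the best score has stopped improving. A minimal pure-Python sketch of that logic (illustrative, not Hyperactive's API):

```python
def run_with_early_stopping(objective, candidates, n_iter_no_change=25, tol=0.0):
    """Evaluate candidates in order, stopping once the best score has not
    improved by more than `tol` for `n_iter_no_change` consecutive iterations.

    Returns the best score and the number of iterations actually run.
    """
    best_score = float("-inf")
    stale = 0
    for i, para in enumerate(candidates):
        score = objective(para)
        if score > best_score + tol:
            best_score, stale = score, 0
        else:
            stale += 1
            if stale >= n_iter_no_change:
                break  # no improvement within the patience window
    return best_score, i + 1
```

With a plateauing objective the loop ends well before the candidate list is exhausted.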
v4.4.0
- add new feature: "optimization strategies"
- redesign progress-bar
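An optimization strategy chains several optimizers within one budget, each warm-started from the previous one's best result. A minimal pure-Python sketch of a two-phase strategy, random exploration followed by local hill climbing (illustrative names and signatures, not Hyperactive's strategy API):

```python
import random

def two_phase_strategy(objective, values, n_iter=60, explore_frac=0.5, seed=0):
    """Spend `explore_frac` of the budget on random search over `values`,
    then warm-start hill climbing (stepping to neighbouring indices) from
    the best point found so far."""
    rng = random.Random(seed)
    n_explore = int(n_iter * explore_frac)
    # phase 1: random search
    best_i = rng.randrange(len(values))
    best_score = objective(values[best_i])
    for _ in range(n_explore - 1):
        i = rng.randrange(len(values))
        score = objective(values[i])
        if score > best_score:
            best_i, best_score = i, score
    # phase 2: hill climbing from the phase-1 optimum
    for _ in range(n_iter - n_explore):
        j = min(max(best_i + rng.choice((-1, 1)), 0), len(values) - 1)
        score = objective(values[j])
        if score > best_score:
            best_i, best_score = j, score
    return values[best_i], best_score
```

The split lets a cheap global optimizer find a promising region and a local one refine it.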
v4.3.0
v4.0.0
v3.2.4
Changes from v3.0.0 -> v3.2.4:
- Decouple the number of runs from the number of active processes (thanks to PartiallyTyped); this reduces memory load when the number of jobs is huge
- New feature: The progress board enables the user to monitor the optimization progress during the run.
- Display trend of best score
- Plot parameters and score in parallel coordinates
- Generate filter file to define an upper and/or lower bound for all parameters and the score in the parallel coordinate plot
- List parameters of 5 best scores
- add Python 3.8 to tests
- add warnings when search-space values are not lists
- improve stability of result-methods
- add tests for hyperactive-memory + search spaces
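The decoupling of runs from active processes described above amounts to batching: only a fixed number of worker processes exist at a time, however many runs are requested. A minimal sketch of that scheduling logic (illustrative, not Hyperactive's internals):

```python
def schedule_runs(n_runs, n_processes):
    """Split n_runs into batches of at most n_processes run indices, so that
    at most n_processes workers are alive at any moment regardless of n_runs."""
    batches = []
    for start in range(0, n_runs, n_processes):
        batches.append(list(range(start, min(start + n_processes, n_runs))))
    return batches
```

Each batch would then be handed to a process pool of size `n_processes`, keeping peak memory independent of the total number of runs.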
v2.3.0
- add Tree-structured optimization algorithm (idea from Hyperopt)
- add Decision-tree optimization algorithm (idea from sklearn)
- enable new optimization parameters for bayes-opt:
- max_sample_size: maximum number of samples the gaussian-process regressor trains on; the subsample is drawn by random choice
- skip_retrain: occasionally skips retraining the gaussian-process regressor during the optimization run, effectively returning multiple predictions from one fit (which should be spread apart from one another)
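The max_sample_size parameter caps the quadratic-to-cubic cost of gaussian-process training as the run collects more evaluations. A minimal pure-Python sketch of the random subsampling it describes (illustrative names, not Hyperactive's internals):

```python
import random

def subsample_training_data(X, y, max_sample_size, seed=0):
    """If more than max_sample_size (position, score) pairs have been
    collected, return a random subset of that size to fit the
    gaussian-process regressor on; otherwise return the data unchanged."""
    if len(X) <= max_sample_size:
        return X, y
    rng = random.Random(seed)
    idx = rng.sample(range(len(X)), max_sample_size)
    return [X[i] for i in idx], [y[i] for i in idx]
```

This bounds the fit time per iteration at the cost of ignoring part of the history.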
v2.1.0
- first stable implementation of "long-term-memory" to save/load search positions/parameters and results
- enable warm starting sequence-based optimizers (bayesian optimization, ...) with results from the "long-term-memory"
- enable the usage of gaussian-process regressors other than sklearn's: a GPR class (from GPy, GPflow, ...) can be passed to the "optimizer"-kwarg
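The long-term-memory idea boils down to persisting evaluated (parameters, score) pairs between runs so a later optimizer can be warm-started with them. A minimal csv-based sketch of that save/load round trip (illustrative names and file format, not Hyperactive's implementation):

```python
import csv
import os

def save_memory(path, results):
    """Append evaluated rows (dicts of parameter values plus a 'score' key)
    to a csv file acting as the long-term memory."""
    is_new = not os.path.exists(path)
    fields = list(results[0].keys())
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        if is_new:
            writer.writeheader()
        writer.writerows(results)

def load_memory(path):
    """Load previously evaluated positions and scores to warm-start a run."""
    with open(path, newline="") as f:
        return [{k: float(v) for k, v in row.items()}
                for row in csv.DictReader(f)]
```

A sequence-based optimizer can then fit its surrogate model on the loaded rows before proposing new positions.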