shgo stuck searching at x_min, x_max bounds #33
A 15-dimensional problem is too high for the current version of shgo (and for deterministic black-box solvers in general). The solution would have been to switch to Sobol sampling, as the default method triangulates the initial search space, which requires a number of initial function evaluations that grows exponentially with the dimension. There is a newer version of shgo for which a 15-dimensional problem with symmetries is solvable, but merging the sampling libraries with this library in a stable commit is still a work in progress.
This is not an issue at all for shgo itself; it is proven to converge even when there are discontinuities. However, a non-continuous optimisation problem of this size requires considerably more computational resources.
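As a minimal illustration of this point (using a hypothetical 2-D discontinuous objective, not the controller-tuning problem from this issue), shgo with Sobol sampling handles a jump discontinuity on a low-dimensional problem without difficulty:

```python
from scipy.optimize import shgo

# A discontinuous objective: a paraboloid plus a step penalty on part of
# the domain. The global minimum is at the origin with f = 0.
def objective(x):
    step = 1.0 if x[0] > 0.5 else 0.0
    return x[0] ** 2 + x[1] ** 2 + step

bounds = [(-1.0, 1.0), (-1.0, 1.0)]

# Sobol sampling avoids triangulating the search space up front.
result = shgo(objective, bounds, n=128, sampling_method='sobol')
print(result.x, result.fun)
```

The discontinuity at `x[0] = 0.5` does not prevent convergence here because the sampling phase still locates the basin around the true minimiser; it is the dimensionality, not the discontinuity alone, that drives the cost up.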
Thanks for the speedy reply. Having reduced the number of dimensions of the optimization vector […]

Reading the […]
Yes, this is a normal termination; it is simply due to the specified iterations running out. You can increase the number of iterations via the `iters` argument. The first thing I would do is increase the number of sampling points in the first iteration, depending on the amount of computational resources you have available. The snippet below uses the 'sobol' sampling method, since the default simplicial sampling is impractical at this dimensionality:

```python
options = {'disp': True}

shgo(
    objective_function,
    bounds,
    n=10000,
    iters=1,
    sampling_method='sobol',
    options=options,
)
```
To understand why these are the default settings, note that black-box optimisation problems in general have no stopping criteria (not even theoretical ones); therefore, without more information, a black-box optimisation would run forever in search of a better point (in the literature's toy problems the lowest objective function value is known). In practical problems the hyperparameters are usually experimented with. In addition, if you are working with, for example, error functions, it is known that the objective function is bounded below by zero. You can add a minimum objective function value as follows:

```python
options = {'f_min': 1e-5,  # Replace 1e-5 or 0.0 with an error value that is "good enough" for your application
           'disp': True,
           'minimize_every_iter': True}

optimize.shgo(
    self.objective_function,
    self.bounds,
    callback=self.callback_function_SHGO,
    sampling_method='sobol',
    options=options,
)
```

This should run indefinitely until a good-enough point is found. Note that in most black-box literature studies a tolerance of around 2% is also added to the known minima. In most applications the global minimum of a least-squares error function is not zero, so this is not an overly helpful criterion. shgo has other intended stopping criteria as well. Both documentation and adequate beta testing are currently still lacking in this project. Finally, some additional considerations: […]
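As a sketch (on a hypothetical least-squares-style objective, not the one from this issue), the `f_min` stopping option can be exercised like this:

```python
from scipy.optimize import shgo

# Hypothetical stand-in for an error function known to be bounded below by zero.
def squared_error(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2

bounds = [(-1.0, 1.0), (-1.0, 1.0)]

# Stop once a refined point reaches the "good enough" value.
options = {'f_min': 1e-5, 'disp': False}
result = shgo(squared_error, bounds, n=64, sampling_method='sobol',
              options=options)
print(result.x, result.fun)
```

Because the local refinement step typically drives the value well below `f_min` on a smooth problem like this, the criterion mainly matters for expensive objectives where each extra global iteration is costly.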
I'm trying to use `shgo` to tune the gains of a control algorithm in a black-box fashion, by iteratively running the controller in a simulation (each simulation only takes about 20 seconds to complete) and computing a cost from the simulation results, which can then be used as the objective function `f` for `shgo`. However, having left `shgo` running for the past few hours, I'm not seeing any progress being made. What I have noticed is the following:

`shgo` seems to be stuck switching the values in the optimization parameter vector `x` between the bounds `x_min` and `x_max` that were given to it, and doesn't seem to have done any searching within the bounds themselves.

In the first few minutes of running `shgo`, it iterates quickly through each simulation run; however, as time goes on it slows down substantially, spending minutes between each iteration. What is `shgo` doing in the minutes between simulation runs, and is there a way to produce verbose output to investigate further?

Due to the strange results seen above, two likely culprits initially come to mind. The first is that my optimization vector `x` is of dimension 15, and after reading the `shgo` documentation it looks like anything above 10 dimensions will be a struggle for `shgo` to solve quickly (due to symmetries in the system dynamics, I can likely get this number below 10, though). The second potential problem is that the objective function (i.e. the cost output from the simulation) can return `inf` (or `sys.float_info.max`) if the controller causes the system it is controlling to become unstable and have its states blow up. Can `shgo` deal with objective functions that return `inf`?
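A common workaround for the `inf` question is to clamp non-finite costs before they reach the solver. This is a sketch under the assumption that replacing unstable runs with a large finite penalty is acceptable for the application; `run_simulation` below is a hypothetical stand-in for the 20-second controller simulation:

```python
import math

# Hypothetical stand-in for the controller simulation cost.
def run_simulation(x):
    if x[0] > 10.0:          # pretend large gains destabilise the system
        return float('inf')
    return sum(v ** 2 for v in x)

# A large but finite penalty keeps the local minimisers inside shgo from
# choking on inf/nan values returned by unstable simulation runs.
PENALTY = 1e12

def objective(x):
    cost = run_simulation(x)
    return cost if math.isfinite(cost) else PENALTY

print(objective([0.5, 0.5]), objective([11.0, 0.0]))
```

With this wrapper the solver sees a finite (if steep) landscape; gradient estimates near the penalty region will still be poor, so a penalty only modestly above the worst stable cost tends to behave better than `sys.float_info.max`.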