Minuit Optimization failed. Estimated distance to minimum too large. #1781
-
Dear pyhf community,

Code:

```python
pyhf.set_backend(
    "numpy", pyhf.optimize.minuit_optimizer(verbose=True, tolerance=10000, strategy=0)
)

def fitresults(constraints=None):
    constraints = constraints or []
    init_pars = model.config.suggested_init()
    fixed_params = model.config.suggested_fixed()
    for idx, fixed_val in constraints:
        init_pars[idx] = fixed_val
        fixed_params[idx] = True
    result, result_obj = pyhf.infer.mle.fit(
        data,
        model,
        maxiter=1000000,
        init_pars=init_pars,
        fixed_params=fixed_params,
        return_uncertainties=True,
        return_result_obj=True,
    )
    bestfit = result[:, 0]
    errors = result[:, 1]
    return bestfit, errors, result_obj.corr
```

Error:

```pytb
    result, result_obj = pyhf.infer.mle.fit(
  File "/afs/cern.ch/user/s/ssaha/.local/lib/python3.8/site-packages/pyhf/infer/mle.py", line 131, in fit
    return opt.minimize(
  File "/afs/cern.ch/user/s/ssaha/.local/lib/python3.8/site-packages/pyhf/optimize/mixins.py", line 184, in minimize
    result = self._internal_minimize(
  File "/afs/cern.ch/user/s/ssaha/.local/lib/python3.8/site-packages/pyhf/optimize/mixins.py", line 64, in _internal_minimize
    raise exceptions.FailedMinimization(result)
pyhf.exceptions.FailedMinimization: Optimization failed. Estimated distance to minimum too large.
```

Is there a way to debug this in a better way? I can provide the full script if needed.

Best,
Replies: 2 comments 3 replies
-
@ssaha1234 A question about your environment so that we know what versions we're dealing with here. I see from your traceback that you're on LXPLUS at CERN but that you're also not in a Python virtual environment. I would highly encourage you to do everything you can to work only inside a Python virtual environment where you install the packages yourself, else you can run into lots of problems. For example, I assume here that you have an LCG view set up as well? If so, then it is very hard to know which versions of libraries you actually have to work with.

Can you tell us what your environment is like? A good start would be the output of

```console
$ python -m pip list | grep 'pyhf\|scipy\|numpy\|iminuit'
```

Aside: If you're trying to work with LCG views on a CVMFS system it can be very challenging to work with virtual environments, but it is possible! I've started a toy implementation of a little project that abstracts away some of the difficulty here, so you can use a Python virtual environment almost like you normally would, regardless of the fact that you're inside an LCG view: https://github.com/matthewfeickert/cvmfs-venv. If you try it out and run into questions or problems, please ping me on its GitHub Issues.
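For completeness, a minimal sketch of that workflow on a machine where you control the install (the path `~/venvs/pyhf` and the `pyhf[minuit]` extra are just example choices here, and the install step needs network access):

```shell
# Create a fresh virtual environment and activate it, so that the
# packages you install are the ones you actually run against.
python3 -m venv "$HOME/venvs/pyhf"
. "$HOME/venvs/pyhf/bin/activate"

# Install pyhf with the iminuit-backed optimizer (requires network).
python -m pip install --upgrade pip 'pyhf[minuit]'

# Report the versions of the relevant packages for debugging.
python -m pip list | grep -i 'pyhf\|scipy\|numpy\|iminuit'
```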
-
Hi @ssaha1234, the most common cause for this in my experience is that your fit model lacks relevant degrees of freedom to model the data you are fitting.
This also points towards the explanation: by sufficiently changing numbers there, you can essentially make any fit fail, because the observation will end up as something that can no longer be described by the model.

I would suggest plotting your distributions to see how close your observations are to the model. When also visualizing model uncertainties, you can roughly judge whether some observation may be so far away that a fit might fail. To visualize, you can use …
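As a rough numerical illustration of "so far away that a fit might fail" (all yields below are made up for the example, not taken from your model), a quick per-bin pull computed with plain NumPy flags bins where the data sits many model standard deviations away from the prediction:

```python
import numpy as np

# Hypothetical numbers standing in for one channel of a model:
# expected yields, total model uncertainty per bin, and observed data.
expected = np.array([52.0, 31.0, 12.0])
model_unc = np.array([4.0, 3.0, 2.0])
observed = np.array([55.0, 30.0, 45.0])

# A rough per-bin "pull": how many standard deviations the observation
# sits from the prediction, adding the Poisson variance of the data
# (~observed counts) to the model uncertainty in quadrature.
pulls = (observed - expected) / np.sqrt(model_unc**2 + observed)

for i, pull in enumerate(pulls):
    print(f"bin {i}: pull = {pull:+.1f}")
# bin 2 stands out at pull = +4.7 -- an observation the model
# cannot accommodate, which is the kind of bin that breaks fits.
```

If every bin's pull is at the few-sigma level or below, the model at least has a chance of describing the data; a single extreme bin like the last one is a good place to start looking.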