"Inequality constraints incompatible" Error when fitting a workspace #1884
Hi, while trying to fit the pyhf workspace available here, I get the following error:

```
pyhf.exceptions.FailedMinimization: Inequality constraints incompatible
```

It seems this may be due to bins with zero expected events (there are a few in the tails of some distributions), but I am not sure how to diagnose or fix this. If this is the issue, is there a way to protect against or work around it? (Of course, the issue could be something else entirely.) I am using pyhf 0.6.3 with the numpy backend, and a minimal example to reproduce the error is as follows:

The output is in the attached log.txt. Any help getting this example working would be very much appreciated; thanks in advance.
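As a diagnostic for the suspected cause, a small scan over the workspace JSON can list the bins with zero expected events. This is a hedged sketch, not part of pyhf's API: the helper name is made up, and the workspace file name is an assumption taken from later in this thread.

```python
import json

def find_zero_bins(spec):
    """List (channel, sample, bin index) entries whose expected yield is 0.
    Hypothetical helper; operates on a plain pyhf workspace dict."""
    hits = []
    for ch in spec["channels"]:
        for sam in ch["samples"]:
            for i, y in enumerate(sam["data"]):
                if y == 0.0:
                    hits.append((ch["name"], sam["name"], i))
    return hits

# Usage (file name is an assumption):
# with open("workspace_Comb.json") as f:
#     spec = json.load(f)
# print(find_zero_bins(spec))
```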
Hi @nicolas-berger-github, I investigated the workspace and believe the issue is ultimately due to a few `normsys` modifiers being ill-defined: the multiplier in the "down" direction is set to 0.0, but a strictly positive number is needed for the interpolation / extrapolation to work. Subsequently, the model prediction becomes NaN and the fit fails. A minimal example extracted from the workspace provided is the following (other channels, samples, modifiers, and bins pruned out):

```python
import pyhf

spec = {
    "channels": [
        {
            "name": "os2l_ge8j3bL_SR__BDT_opt4__full",
            "samples": [
                {
                    "data": [0.11121033468264852],
                    "modifiers": [
                        {
                            "data": {"hi": 2.70109, "lo": 0.0},
                            "name": "singletop_PS",
                            "type": "normsys",
                        }
                    ],
                    "name": "singletop",
                }
            ],
        }
    ],
    "measurements": [
        {
            "config": {"parameters": [], "poi": ""},
            "name": "SM4t_1LOS_McBased_212750_v3p12",
        }
    ],
    "observations": [
        {"data": [35.0], "name": "os2l_ge8j3bL_SR__BDT_opt4__full"}
    ],
    "version": "1.0.0",
}

model = pyhf.Workspace(spec).model()
print(model.expected_data(model.config.suggested_init()))
```

which returns

```
[...]/pyhf/src/pyhf/tensor/numpy_backend.py:259: RuntimeWarning: divide by zero encountered in log
return np.log(tensor_in)
[...]/pyhf/src/pyhf/interpolators/code4.py:118: RuntimeWarning: invalid value encountered in multiply
-default_backend.log(self._deltas_dn) * deltas_dn_alpha0,
[...]/pyhf/src/pyhf/interpolators/code4.py:121: RuntimeWarning: invalid value encountered in multiply
default_backend.power(default_backend.log(self._deltas_dn), 2)
[nan 0.]
```

where the NaN shows that the model prediction is ill-defined. As a workaround, the following script replaces those instances of zero with a small positive value (0.001):

```python
import json

with open("workspace_Comb.json") as f:
    spec = json.load(f)

for ch in spec["channels"]:
    for sam in ch["samples"]:
        for mod in sam["modifiers"]:
            if mod["type"] != "normsys":
                continue
            if mod["data"]["lo"] == 0.0:
                print(ch["name"], sam["name"], mod["name"])
                # set new value
                mod["data"]["lo"] = 0.001

with open("workspace_Comb_fixed.json", "w") as f:
    f.write(json.dumps(spec, sort_keys=True, indent=4))
```

It also lists the problematic channels, samples, and modifiers.
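To see why `lo = 0.0` is fatal, here is a rough numerical illustration (not the exact pyhf code): the code4 interpolator takes the log of the down multiplier, and `log(0)` is `-inf`, which turns into NaN as soon as it is multiplied by a zero in the polynomial coefficients — matching the two RuntimeWarnings in the output above.

```python
import numpy as np

hi, lo = 2.70109, 0.0  # multipliers from the problematic normsys modifier

with np.errstate(divide="ignore", invalid="ignore"):
    log_lo = np.log(lo)    # -inf (the "divide by zero" warning above)
    coeff = -log_lo * 0.0  # nan: -inf * 0 is undefined
                           # (the "invalid value encountered in multiply" warning)

print(log_lo, coeff)  # -inf nan
```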
When using the new workspace version, the fit should no longer fail in this way. I also converted the original workspace to ROOT and ran the fit there, which worked without issues; perhaps ROOT applies some additional protection here. From a brief look at it, #1845 does not seem to address this specific cause of NaNs.
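More generally, a pre-fit sanity check along these lines could catch ill-defined `normsys` modifiers (including the analogous `hi` case) before the minimizer ever sees them. The function name is hypothetical, not a pyhf API; it only inspects the plain workspace dict.

```python
def find_bad_normsys(spec):
    """Return (channel, sample, modifier) name triples whose normsys
    multipliers are not strictly positive. Hypothetical helper."""
    bad = []
    for ch in spec["channels"]:
        for sam in ch["samples"]:
            for mod in sam["modifiers"]:
                if mod["type"] != "normsys":
                    continue
                if mod["data"]["lo"] <= 0.0 or mod["data"]["hi"] <= 0.0:
                    bad.append((ch["name"], sam["name"], mod["name"]))
    return bad
```

Run on the minimal spec above, this would flag `("os2l_ge8j3bL_SR__BDT_opt4__full", "singletop", "singletop_PS")`.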