adding debug mode to inferer #248
Conversation
Wait, I don't understand what it means by "any changes to ..."
Nice way to weave bayeux into the MechanisticInferer class.
My request is to add a test which checks if we would have picked up the issue we had when abstol/reltol wasn't aggressive enough, e.g. would this debug mode have captured potential problems without having to run chains etc.
@SamuelBrand1 added a test and passed kwargs so that _debug_inferer does not need to be updated every time the likelihood is changed.
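For reference, a minimal sketch of what such a test could look like, assuming a fixture that builds a MechanisticInferer with configurable ODE solver tolerances and a debug helper that returns True when bayeux's checks pass; the fixture name, constructor kwargs, and call signature are assumptions, not the exact test in this PR:

```python
import jax


def test_debug_mode_flags_loose_solver_tolerances(make_inferer):
    # make_inferer is a hypothetical fixture that builds a MechanisticInferer
    # with the given ODE solver tolerances; the kwargs are assumptions.
    strict_inferer = make_inferer(abstol=1e-8, reltol=1e-6)
    loose_inferer = make_inferer(abstol=1e-1, reltol=1e-1)

    # The debug helper is assumed to return True when bayeux's pre-flight
    # checks pass; no MCMC chains are run in either case.
    assert strict_inferer._debug_likelihood(seed=jax.random.PRNGKey(0))
    assert not loose_inferer._debug_likelihood(seed=jax.random.PRNGKey(0))
```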
Looks good. Running it locally gives:
Should I be interested in the fails here?
To be honest, I am not sure. @kokbent @tjhladish I am curious about your thoughts.
Upon reflection, whether the specific test model has debug fails is not in scope for this PR.
Handy extra functionality. Whether the debug fails on the test version of the model are relevant is another issue.
As mentioned in #244, it is helpful to have sanity checks on our likelihood functions. This function, added to the mechanistic_inferer, shows off a basic example of how to implement such a sanity check. Any changes to self.likelihood must also be added to the _debug_likelihood function. Due to the layout of the self.likelihood function, we can not just pass it alone into bx_model.mcmc.numpyro_nuts.debug, so overrides are needed separately.
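A rough sketch of the wrapper pattern described above, assuming the likelihood can be exposed as a plain log-density over a parameter dict; the body of _debug_likelihood, the get_test_point helper, and the kwargs forwarding shown here are illustrative assumptions rather than the exact code in this PR:

```python
import bayeux as bx
import jax


class MechanisticInferer:
    # ... existing inference code elided ...

    def _debug_likelihood(self, **kwargs) -> bool:
        """Illustrative sanity check on the likelihood via bayeux's debug mode."""

        def log_density(params):
            # Forward kwargs so this wrapper does not need editing every
            # time the likelihood signature changes (hypothetical call shape).
            return self.likelihood(**params, **kwargs)

        bx_model = bx.Model(
            log_density=log_density,
            # get_test_point is a hypothetical helper returning a dict of
            # plausible parameter values at which to run the checks.
            test_point=self.get_test_point(),
        )
        # bayeux's debug mode runs cheap pre-flight checks (shapes, finite
        # log density, usable gradients) without running any MCMC chains;
        # it is assumed here to return True when all checks pass.
        return bx_model.mcmc.numpyro_nuts.debug(seed=jax.random.PRNGKey(0))
```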