Shap loss values #347
Comments
I agree that this would be of interest. I think SAGE is the method you are looking for here. Take a look at this nice presentation: https://iancovert.com/blog/understanding-shap-sage/
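For readers unfamiliar with SAGE, below is a minimal sketch of how loss-based feature importance might be computed with Ian Covert's `sage` package (installable as `sage-importance`). The model, synthetic data, and background-set size are illustrative assumptions, not part of the discussion above.

```python
# Sketch: SAGE-style loss attribution with the `sage` package (pip install sage-importance).
# Model choice, data, and sample sizes here are illustrative only.
import numpy as np
import sage
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=1000)

model = RandomForestRegressor(n_estimators=100).fit(X[:800], y[:800])

# The marginal imputer handles held-out features by averaging predictions
# over a background sample (this assumes feature independence).
imputer = sage.MarginalImputer(model, X[:200])

# The permutation estimator attributes the model's expected loss (here MSE)
# to each feature, which is the "importance for the loss" idea discussed above.
estimator = sage.PermutationEstimator(imputer, 'mse')
sage_values = estimator(X[800:], y[800:])

print(sage_values.values)  # one loss-based importance value per feature
```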
Interesting! From what I understand, SAGE seems like it would be relatively easy to implement with the current code, mainly altering the …
Hi @martinju and @rawanmahdi, I find this interesting and would like to know if there are any updates. I would like to see the contribution of each feature to the loss of an MLPRegressor model. I am quite new to this field, so I would like to learn more. My current understanding is that only models with a single output variable are supported at the moment, i.e. y = f(x1, x2, x3, ..., xn). In my case the output has more than one variable [e.g. (y1, y2) = f(x1, x2, x3, ..., xn)], so I am trying to avoid the issue discussed in #323. Please correct me if I am wrong here. Is it possible to pass a user-defined Python function …
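One way to sidestep the multi-output limitation, in the spirit of the question above, is to collapse the multi-output model and its loss into a single scalar-valued function and explain that instead. The sketch below is not shapr's actual interface; the function and variable names are hypothetical, and it only illustrates the wrapping idea.

```python
# Sketch: wrap a multi-output MLPRegressor's per-observation loss as a
# scalar-valued function, so any explainer expecting a single output can use it.
# `loss_as_prediction` is a hypothetical helper, not a shapr/shap API.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
Y = np.column_stack([X[:, 0] + X[:, 1], X[:, 2] - X[:, 3]])  # two outputs

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, Y)

def loss_as_prediction(X_rows, Y_true_rows):
    """Per-observation squared error, summed over the two outputs.

    Because the result is a single number per row, it can play the role of the
    "prediction" in a Shapley explainer. In a real loss-SHAP setup the true
    labels stay fixed per explained observation while X is perturbed.
    """
    Y_hat = model.predict(X_rows)
    return ((Y_hat - Y_true_rows) ** 2).sum(axis=1)

per_obs_loss = loss_as_prediction(X, Y)
print(per_obs_loss.shape)  # (500,) -- one scalar per observation
```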
Sorry for the very late reply.
For debugging black box models, it would be nice to get Shapley feature importance values as they relate to the loss of the model rather than the prediction. I've seen this implemented by the original makers of SHAP, using TreeExplainer, with the assumption that features are independent. This Medium article goes into more depth about the implementation.
I'm wondering, would it be possible to obtain model-agnostic SHAP loss values on dependent features, similar to how shapr does so for the predictions?
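As a point of reference for the TreeExplainer-based loss attribution mentioned above, here is a minimal sketch using the `shap` package. The XGBoost model and data are illustrative; as far as I understand the shap API, `model_output="log_loss"` together with `feature_perturbation="interventional"` (which assumes independent features) is what switches the attribution from the prediction to the loss.

```python
# Sketch: loss-based SHAP values with shap.TreeExplainer.
# Model and data are illustrative; the key options are model_output="log_loss"
# and feature_perturbation="interventional" with a background data set.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(
    model,
    data=X[:200],                            # background set for interventional perturbation
    feature_perturbation="interventional",   # assumes feature independence
    model_output="log_loss",                 # attribute the loss, not the prediction
)

# Passing y makes the explainer attribute the per-observation log-loss.
shap_loss_values = explainer.shap_values(X[:10], y[:10])
print(np.asarray(shap_loss_values).shape)  # one loss attribution per feature per row
```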