How we are today
To improve interaction and adoption by users and operators in the process industries, it is important that models are interpretable. At present, the interpretability functionalities provided by BibMon are limited to the sklearnRegressor class and rely solely on feature importances.
Proposed enhancement
We propose implementing advanced interpretability techniques such as LIME (Local Interpretable Model-agnostic Explanations; Ribeiro et al., 2016) and SHAP (SHapley Additive exPlanations; Lundberg and Lee, 2017).
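As a rough illustration of what a SHAP-based explanation could look like, the sketch below applies SHAP's model-agnostic KernelExplainer to a generic fitted regressor. The synthetic data, the variable names (T1, T2, P1, F1), and the RandomForestRegressor stand-in are assumptions for the example only; nothing here reflects BibMon's current API.

```python
# Minimal sketch of a model-agnostic SHAP explanation, assuming a fitted
# regressor that exposes a predict() function. All data and names below are
# synthetic placeholders, not BibMon objects.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic data standing in for process measurements
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 4)),
                 columns=["T1", "T2", "P1", "F1"])
y = 2.0 * X["T1"] - 0.5 * X["P1"] + rng.normal(scale=0.1, size=200)

# Stand-in for the regressor wrapped by a model
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# KernelExplainer is model-agnostic: it only needs a predict function
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X.iloc[:5])

# Summary plot of feature contributions for the explained samples
shap.summary_plot(shap_values, X.iloc[:5])
```

Because KernelExplainer depends only on a predict function, the same pattern would apply to any regression model, which is what makes it attractive for a generic implementation.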
Implementation
Ideally, these functionalities should be implemented in files such as _generic_model.py or _bibmon_tools.py. This approach would ensure that the new interpretability techniques are accessible to all models within the library; a possible shape for such a helper is sketched below.
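One option for a model-agnostic placement would be a small helper that takes any predict function, along the lines of the hypothetical sketch below. The function name explain_instance_lime and its signature are illustrative only and do not exist in BibMon.

```python
# Hypothetical helper in the spirit of _bibmon_tools.py: it relies only on a
# predict function, so any model exposing one could use it.
# `explain_instance_lime` and its signature are illustrative, not part of BibMon.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer


def explain_instance_lime(predict_fn, X_train, x, feature_names,
                          num_features=5):
    """Return LIME feature contributions for a single sample `x`
    as a list of (feature description, weight) pairs."""
    explainer = LimeTabularExplainer(
        np.asarray(X_train),
        feature_names=feature_names,
        mode="regression",
    )
    explanation = explainer.explain_instance(
        np.asarray(x).ravel(), predict_fn, num_features=num_features
    )
    return explanation.as_list()
```

With the synthetic model and data from the previous sketch, this could be called as explain_instance_lime(model.predict, X, X.iloc[0], list(X.columns)).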