Add new interpretability techniques #43

Open
afraniomelo opened this issue Sep 10, 2024 · 0 comments
Labels
enhancement Improvements to existing functionality

afraniomelo commented Sep 10, 2024

Where we are today

To improve interaction with, and adoption by, users and operators in the process industries, it is important that models are interpretable. At present, the interpretability functionality provided by BibMon is limited to the sklearnRegressor class and relies solely on feature importances.

Proposed enhancement

We propose implementing advanced interpretability techniques such as LIME (Local Interpretable Model-agnostic Explanations) (Ribeiro et al., 2016) and SHAP (SHapley Additive exPlanations) (Lundberg and Lee, 2017). A brief illustration of the kind of output both techniques produce is sketched below.
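
As a minimal sketch (not BibMon code), assuming the `shap` and `lime` packages and using a scikit-learn regressor as a stand-in for a BibMon model, both techniques can be applied to a fitted model purely through its predict function:

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

# Toy regression data and a fitted model standing in for a BibMon regressor.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = X_train[:, 0] + 2 * X_train[:, 1] + rng.normal(scale=0.1, size=200)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

x = X_train[0]  # single observation to explain

# SHAP: model-agnostic KernelExplainer built around the predict function.
background = shap.sample(X_train, 50)
shap_explainer = shap.KernelExplainer(model.predict, background)
shap_values = shap_explainer.shap_values(x, nsamples=200)
print("SHAP values:", shap_values)

# LIME: fits a local surrogate model around the same observation.
lime_explainer = LimeTabularExplainer(X_train, mode='regression')
lime_exp = lime_explainer.explain_instance(x, model.predict, num_features=4)
print("LIME weights:", lime_exp.as_list())
```

Both outputs attribute the prediction for a single observation to the individual input variables, which is the kind of local explanation operators could use to diagnose a detected fault.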

Implementation

Ideally, these functionalities should be implemented in files such as _generic_model.py or _bibmon_tools.py. Because both LIME and SHAP are model-agnostic, implementing them at this level would make the new interpretability techniques accessible to all models within the library, not just sklearnRegressor; a rough integration sketch follows.
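
As a rough sketch of what such a helper could look like (the function name and signature are hypothetical, and it assumes only that a BibMon model exposes a predict-style callable), something along these lines could live in _bibmon_tools.py:

```python
import numpy as np
import shap

def compute_shap_values(predict_fn, X_background, X_explain, nsamples=100):
    """Model-agnostic SHAP explanations built around a model's predict function.

    predict_fn   -- callable mapping a 2D array of samples to predictions
    X_background -- reference data for KernelExplainer (e.g. the training set)
    X_explain    -- observations whose predictions should be explained
    """
    X_background = np.asarray(X_background)
    # Summarize the background set to keep KernelExplainer tractable.
    background = shap.sample(X_background, min(100, len(X_background)))
    explainer = shap.KernelExplainer(predict_fn, background)
    return explainer.shap_values(np.asarray(X_explain), nsamples=nsamples)
```

Since the helper depends only on a prediction callable and reference data, any model in the library could pass its own predict method, which keeps the new functionality independent of the sklearnRegressor class.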
