Merge pull request #19 from BBVA/develop
Notebooks correction
DaniSanchezSantolaya authored Sep 21, 2023
2 parents 69ca4fa + 6e84e85 commit b2f115d
Showing 5 changed files with 3,321 additions and 1,996 deletions.
1,581 changes: 819 additions & 762 deletions tutorials/BasicTutorial.ipynb

Large diffs are not rendered by default.

849 changes: 849 additions & 0 deletions tutorials/BasicTutorial_colab.ipynb

Large diffs are not rendered by default.

17 changes: 15 additions & 2 deletions tutorials/tutorial_clustering_tree_explainer.ipynb
Expand Up @@ -9,7 +9,20 @@
"# Clustering Tree Explainer\n",
"\n",
"\n",
"When executing a clustering algorithm like K-Means usually the samples in our dataset are partitioned to K different clusters/groups. However, sometimes is difficult to understand why a sample is assigned to a particular cluster or what are the features that characterize a cluster. To improve interpretability, a small decision tree can be used to partition the data in the assigned clusters of a previously run clustering algorithm. In this tutorial, we show how we can build this decision tree using the `ClusteringTreeExplainer` class. This class builds a decision tree based on the [Iterative Mistake Minimization (IMM)](https://arxiv.org/pdf/2002.12538.pdf) method and [ExKMC: Expanding Explainable k-Means Clustering] (https://arxiv.org/pdf/2006.02399.pdf)\n"
"When executing a clustering algorithm such as K-Means, the samples in our dataset are usually partitioned into K different clusters/groups. However, it is sometimes difficult to understand why a sample is assigned to a particular cluster, or which features characterize a cluster. To improve interpretability, a small decision tree can be used to partition the data into the clusters assigned by a previously run clustering algorithm. In this tutorial, we show how to build this decision tree using the `ClusteringTreeExplainer` class. This class builds a decision tree based on the [Iterative Mistake Minimization (IMM)](https://arxiv.org/pdf/2002.12538.pdf) method and [ExKMC: Expanding Explainable k-Means Clustering](https://arxiv.org/pdf/2006.02399.pdf).\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"To execute this tutorial, you need to install mercury-explainability and pyspark (if they are not already installed). You can install them by executing the next command in a cell:\n",
"\n",
"```\n",
"!pip install mercury-explainability pyspark\n",
"```"
]
},
{
Expand Down Expand Up @@ -1295,5 +1308,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}
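The markdown cell in the diff above describes the core idea: approximate a fitted K-Means partition with a small decision tree so that each cluster is explained by a few feature thresholds. The following is a minimal sketch of that idea using plain scikit-learn, not the `mercury-explainability` API (the IMM/ExKMC algorithms in `ClusteringTreeExplainer` choose splits differently; this sketch only illustrates the "small tree over cluster labels" concept):

```python
# Sketch: explain a K-Means partition with a small decision tree.
# Uses scikit-learn only; this is NOT the mercury-explainability API.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data with three well-separated groups.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=42)

# 1) Run the clustering algorithm we want to explain.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

# 2) Fit a small tree on the cluster assignments. Limiting the tree to
#    k leaves mirrors the IMM setting of one leaf per cluster.
tree = DecisionTreeClassifier(max_leaf_nodes=3, random_state=42)
tree.fit(X, kmeans.labels_)

# The tree's thresholds describe which feature splits define each cluster.
print(export_text(tree, feature_names=["x0", "x1"]))
```

On well-separated data like this, the tree reproduces the K-Means assignment almost exactly, and its printed rules give a human-readable description of each cluster.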
1,114 changes: 687 additions & 427 deletions tutorials/tutorial_counterfactual_basic_explainer.ipynb

Large diffs are not rendered by default.


0 comments on commit b2f115d
