This repository includes artifacts for reuse and reproduction of experimental results presented in our ASE'24 paper titled "Root Cause Analysis for Microservices based on Causal Inference: How Far Are We?". Please refer to legacy/README-ASE.md for more information.
We recommend using machines equipped with at least 8 cores, 16 GB RAM, and ~30 GB of available disk space, running Ubuntu 22.04 or Ubuntu 20.04, with Python 3.10 or newer.
The default environment, which is used for most methods, can be easily installed as follows. Detailed installation instructions for all methods are in SETUP.md.
Open your terminal and run the following commands:
sudo apt update -y
sudo apt install -y build-essential \
libxml2 libxml2-dev zlib1g-dev \
python3-tk graphviz
Clone RCAEval from GitHub
git clone https://github.com/phamquiluan/RCAEval.git && cd RCAEval
Create a virtual environment with Python 3.10 (refer to SETUP.md for how to install Python 3.10 on Linux)
python3.10 -m venv env
. env/bin/activate
Install RCAEval using pip
pip install pip==20.0.2
pip install -e .[default]
Or, install RCAEval from PyPI
# Install RCAEval from PyPI
pip install pip==20.0.2
pip install RCAEval[default]
Test the installation
python -m pytest tests/test.py::test_basic
Expected output after running the above command (it takes less than 1 minute)
$ pytest tests/test.py::test_basic
============================== test session starts ===============================
platform linux -- Python 3.10.12, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/ubuntu/RCAEval
plugins: dvc-2.57.3, hydra-core-1.3.2
collected 1 item
tests/test.py . [100%]
=============================== 1 passed in 3.16s ================================
The telemetry data must be provided as a pandas.DataFrame with a column named time that stores the timestep. A sample of valid data can be downloaded using the download_data() method, which we demonstrate shortly below.
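As a concrete illustration, a minimal valid frame might look like the sketch below. Only the time column is required by the description above; the metric column names are hypothetical examples.

```python
import pandas as pd

# Minimal telemetry frame: a `time` column holding timesteps plus numeric
# metric columns. The metric names below are hypothetical examples; any
# numeric columns can accompany the required `time` column.
data = pd.DataFrame({
    "time": [1692569330, 1692569331, 1692569332],
    "cartservice_cpu": [0.12, 0.15, 0.93],
    "cartservice_mem": [0.40, 0.41, 0.88],
})

assert "time" in data.columns  # the only hard requirement stated above
print(data.shape)  # (3, 3)
```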
RCAEval stores all the RCA methods in the e2e module (implemented in RCAEval.e2e). Available methods are: pc_pagerank, pc_randomwalk, fci_pagerank, fci_randomwalk, granger_pagerank, granger_randomwalk, lingam_pagerank, lingam_randomwalk, fges_pagerank, fges_randomwalk, ntlr_pagerank, ntlr_randomwalk, causalrca, causalai, run, microcause, e_diagnosis, baro, rcd, nsigma, and circa.
A basic example of using BARO, an RCA baseline, to perform RCA is presented as follows:
# You can save this code to a file named test.py
from RCAEval.e2e import baro
from RCAEval.utility import download_data, read_data
# download a sample data to data.csv
download_data()
# read data from data.csv
data = read_data("data.csv")
anomaly_detected_timestamp = 1692569339
# perform root cause analysis
root_causes = baro(data, anomaly_detected_timestamp)["ranks"]
# print the top 5 root causes
print("Top 5 root causes:", root_causes[:5])
Expected output after running the above code (it takes around 1 minute)
$ python test.py
Downloading data.csv..: 100%|████████████████████| 570k/570k [00:00<00:00, 19.8MiB/s]
Top 5 root causes: ['emailservice_mem', 'recommendationservice_mem', 'cartservice_mem', 'checkoutservice_latency', 'cartservice_latency']
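Each ranked entry combines a service and a metric (e.g., cartservice_mem). To collapse the ranking to service level, a small post-processing helper can be used; this is a sketch based on the naming pattern in the example output above, and the underscore split is an assumption.

```python
def services_from_ranks(ranks):
    """Collapse metric-level ranks like 'cartservice_mem' into an ordered,
    de-duplicated list of service names. Assumes the '<service>_<metric>'
    naming pattern seen in the example output above."""
    seen, services = set(), []
    for entry in ranks:
        service = entry.rsplit("_", 1)[0]  # drop the trailing metric suffix
        if service not in seen:
            seen.add(service)
            services.append(service)
    return services

ranks = ['emailservice_mem', 'recommendationservice_mem', 'cartservice_mem',
         'checkoutservice_latency', 'cartservice_latency']
print(services_from_ranks(ranks))
# ['emailservice', 'recommendationservice', 'cartservice', 'checkoutservice']
```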
We provide a script named rq2.py to assist in reproducing the RQ2 results from our paper. This script can be executed using Python with the following syntax:
python rq2.py [-h] [--dataset DATASET] [--method METHOD] [--tdelta TDELTA] [--length LENGTH] [--test]
The available options and their descriptions are as follows:
options:
-h, --help Show this help message and exit
--dataset DATASET Choose a dataset. Valid options:
[online-boutique, sock-shop-1, sock-shop-2, train-ticket,
circa10, circa50, rcd10, rcd50, causil10, causil50]
--method METHOD Choose a method (`pc_pagerank`, `pc_randomwalk`, `fci_pagerank`, `fci_randomwalk`, `granger_pagerank`, `granger_randomwalk`, `lingam_pagerank`, `lingam_randomwalk`, `fges_pagerank`, `fges_randomwalk`, `ntlr_pagerank`, `ntlr_randomwalk`, `causalrca`, `causalai`, `run`, `microcause`, `e_diagnosis`, `baro`, `rcd`, `nsigma`, and `circa`)
--tdelta TDELTA    Specify t_delta to simulate a delay in anomaly detection (e.g., `--tdelta 60`)
--length LENGTH Specify the length of the time series (used for RQ4)
--test             Perform a smoke test on certain methods without a full run
For example, Table 5 reports the performance of BARO on the Online Boutique dataset. To reproduce these results, run:
python rq2.py --dataset online-boutique --method baro
The expected output should be exactly as presented in the paper (it takes less than 1 minute to run the code)
--- Evaluation results ---
Avg@5-CPU: 0.97
Avg@5-MEM: 1.0
Avg@5-DISK: 0.91
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.67
---
Avg speed: 0.07
Table 5 also reports the performance of BARO with a simulated 60-second anomaly detection delay. To reproduce these results, run:
python rq2.py --dataset online-boutique --method baro --tdelta 60
The expected output should be exactly as presented in the paper (it takes less than 1 minute to run the code)
--- Evaluation results ---
Avg@5-CPU: 0.94
Avg@5-MEM: 0.99
Avg@5-DISK: 0.87
Avg@5-DELAY: 0.99
Avg@5-LOSS: 0.6
---
Avg speed: 0.07
We can replace the baro method with other methods (e.g., nsigma, fci_randomwalk) and substitute online-boutique with other datasets to replicate the corresponding results shown in Table 5. This reproduction process is also integrated into our Continuous Integration (CI) setup. For more details, refer to the .github/workflows/reproduce-rq2.yml file.
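The sweep over methods and datasets can also be scripted. The sketch below only builds the command lines (with method and dataset names taken from the lists above) and executes them when rq2.py is present, so it is a convenience wrapper rather than part of RCAEval itself.

```python
import os
import subprocess

# Any names from the method and dataset lists above can go here.
methods = ["baro", "nsigma", "fci_randomwalk"]
datasets = ["online-boutique", "sock-shop-1"]

# Build one rq2.py invocation per (dataset, method) pair.
commands = [
    ["python", "rq2.py", "--dataset", d, "--method", m]
    for d in datasets
    for m in methods
]
for cmd in commands:
    print(" ".join(cmd))
    if os.path.exists("rq2.py"):  # only execute inside the repository
        subprocess.run(cmd, check=True)
```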
The running time of each method is recorded by the RQ1 and RQ2 scripts described above.
Our RQ4 relies on the same RQ1 and RQ2 scripts, with the additional --length option.
As presented in Figure 3, BARO maintains stable accuracy on the Online Boutique dataset when we vary the data length from 60 to 600. To reproduce these results, for example, you can run the following Bash script:
# You can save this code to a file named tmp.sh and run it with `bash tmp.sh`
for length in 60 120 180 240 300 360 420 480 540 600; do
python rq2.py --dataset online-boutique --method baro --length $length
done
Expected output after running the above code (it takes a few minutes).
The output lists the Avg@5 scores as the data length varies; you can see that BARO maintains stable performance.
=== VARY LENGTH: 60 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.64it/s]
--- Evaluation results ---
Avg@5-CPU: 0.84
Avg@5-MEM: 0.98
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 120 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.37it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 180 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.46it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 240 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.47it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 300 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.40it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 360 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.42it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 420 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.53it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 480 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.37it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 540 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.50it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
=== VARY LENGTH: 600 ===
100%|█████████████████████████████████████████████████████████████████████| 125/125 [00:08<00:00, 14.49it/s]
--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
Avg@5-DISK: 0.94
Avg@5-DELAY: 0.98
Avg@5-LOSS: 0.66
---
Avg speed: 0.07
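To compare these runs programmatically, the printed blocks can be parsed into dictionaries. The helper below is a sketch that assumes the exact `Avg@5-<FAULT>: <score>` and `Avg speed: <score>` line format shown above.

```python
def parse_eval_block(text):
    """Parse lines like 'Avg@5-CPU: 0.82' and 'Avg speed: 0.07' from a
    captured rq2.py output block into a {metric: score} dict."""
    scores = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Avg@5-") or line.startswith("Avg speed"):
            key, value = line.split(":")
            scores[key.strip()] = float(value)
    return scores

# A fragment of the output format shown above.
sample = """--- Evaluation results ---
Avg@5-CPU: 0.82
Avg@5-MEM: 0.99
---
Avg speed: 0.07"""
print(parse_eval_block(sample))
# {'Avg@5-CPU': 0.82, 'Avg@5-MEM': 0.99, 'Avg speed': 0.07}
```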
Our datasets and their descriptions are publicly available in a Zenodo repository with the following information:
- Dataset DOI:
- Dataset URL: https://zenodo.org/records/13305663
We also provide utility functions to download our datasets using Python. The downloaded datasets will be placed in the data directory.
from RCAEval.utility import (
download_syn_rcd_dataset,
download_syn_circa_dataset,
download_syn_causil_dataset,
download_rca_circa_dataset,
download_rca_rcd_dataset,
download_online_boutique_dataset,
download_sock_shop_1_dataset,
download_sock_shop_2_dataset,
download_train_ticket_dataset,
)
download_syn_rcd_dataset()
download_syn_circa_dataset()
download_syn_causil_dataset()
download_rca_circa_dataset()
download_rca_rcd_dataset()
download_online_boutique_dataset()
download_sock_shop_1_dataset()
download_sock_shop_2_dataset()
download_train_ticket_dataset()
Expected output after running the above code (it takes a few minutes to download our datasets)
$ python test.py
Downloading syn_rcd.zip..: 100%|██████████| 11.2M/11.2M [00:03<00:00, 3.56MiB/s]
Downloading syn_circa.zip..: 100%|████████| 52.5M/52.5M [00:06<00:00, 7.51MiB/s]
Downloading syn_causil.zip..: 100%|███████| 73.8M/73.8M [00:09<00:00, 8.05MiB/s]
Downloading rca_circa.zip..: 100%|████████| 52.7M/52.7M [00:06<00:00, 7.95MiB/s]
Downloading rca_rcd.zip..: 100%|██████████| 22.7M/22.7M [00:04<00:00, 5.16MiB/s]
Downloading online-boutique.zip..: 100%|██| 31.0M/31.0M [00:04<00:00, 6.49MiB/s]
Downloading sock-shop-1.zip..: 100%|██████| 3.54M/3.54M [00:02<00:00, 1.68MiB/s]
Downloading sock-shop-2.zip..: 100%|██████| 79.1M/79.1M [00:09<00:00, 8.38MiB/s]
Downloading train-ticket.zip..: 100%|███████| 280M/280M [00:27<00:00, 10.1MiB/s]
- Our paper can be downloaded at https://dl.acm.org/doi/10.1145/3691620.3695065 or docs/ase-paper.pdf
- Our supplementary material can be downloaded at docs/ase-paper-supplementary-material.pdf
This repository includes code from various sources with different licenses. We have included their corresponding license files in the LICENSES directory:
- BARO: Licensed under the MIT License. Original source: BARO GitHub Repository.
- CIRCA: Licensed under the BSD 3-Clause License. Original source: CIRCA GitHub Repository.
- RCD: Licensed under the MIT License. Original source: RCD GitHub Repository.
- E-Diagnosis: Licensed under the BSD 3-Clause License. Original source: PyRCA GitHub Repository.
- CausalAI: Licensed under the BSD 3-Clause License. Original source: CausalAI GitHub Repository.
- NSigma: Licensed under the BSD 3-Clause License. Original source: Scikit-learn GitHub Repository.
- MicroCause: Licensed under the Apache License 2.0. Original source: MicroCause GitHub Repository.
- PC/FCI/LiNGAM: Licensed under the MIT License. Original source: Causal-learn GitHub Repository.
- Granger: Licensed under the BSD 3-Clause "New" or "Revised" License. Original source: Statsmodels GitHub Repository.
- CausalRCA: No License. Original source: CausalRCA GitHub Repository.
- RUN: No License. Original source: RUN GitHub Repository.
The code implemented by us and our datasets are distributed under the MIT License.
We would like to express our sincere gratitude to the researchers and developers who created the baselines used in our study. Their work has been instrumental in making this project possible. We deeply appreciate the time, effort, and expertise that have gone into developing and maintaining these resources. This project would not have been feasible without their contributions.