Merge pull request #13 from ExamonHPC/release/v0.3.0
Release/v0.3.0
fbeneventi authored Dec 22, 2024
2 parents 1963666 + 1030085 commit a4f83e5
Showing 57 changed files with 9,935 additions and 4 deletions.
2 changes: 2 additions & 0 deletions .github/CODEOWNERS
@@ -0,0 +1,2 @@
* @fbeneventi
/.github/ @fbeneventi
3 changes: 2 additions & 1 deletion .gitignore
@@ -4,4 +4,5 @@
.ipynb_*
examon-cache/
examon-cache/*

build
site
88 changes: 88 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,88 @@
# Contributing to ExaMon

First off, thank you for considering contributing to our project!

## How Can I Contribute?

### Reporting Bugs

Before creating a bug report, please check the issue list: the problem may already be reported. When you do create a bug report, please include as many details as possible:

* Use a clear and descriptive title
* Describe the exact steps which reproduce the problem
* Provide specific examples to demonstrate the steps
* Describe the behavior you observed after following the steps
* Explain which behavior you expected to see instead and why
* Include screenshots if possible

### Suggesting Enhancements

If you have a suggestion for the project, we'd love to hear about it. Please include:

* A clear and detailed explanation of the feature
* The motivation behind this feature
* Any alternative solutions you've considered
* If applicable, examples from other projects

### Pull Request Process

1. Fork the repository and create your branch from `master`
2. If you've added code that should be tested, add tests
3. Ensure the test suite passes
4. Update the documentation if needed
5. Issue that pull request!

#### Pull Request Guidelines

* Follow our coding standards (see below)
* Include relevant issue numbers in your PR description
* Update the README.md with details of changes if applicable
* The PR must pass all CI/CD checks [TBD]
* Wait for review from maintainers

### Development Setup

1. Fork and clone the repo
2. Create a branch: `git checkout -b my-branch-name`

### Coding Standards

* Use consistent code formatting
* Write clear commit messages following [Conventional Commits](https://www.conventionalcommits.org/)
* Comment your code where necessary
* Write tests for new features
* Keep the code simple and maintainable

### Commit Messages

We follow a basic specification:

```
type(scope): description

[optional body]

[optional footer]
```

The type should be one of the following:

| Type | Description |
|------|-------------|
| add | Introduces a new feature or functionality |
| fix | Patches a bug or resolves an issue |
| change | Modifies existing functionality or behavior |
| remove | Deletes or deprecates functionality |
| merge | Combines branches or resolves conflicts |
| doc | Updates documentation or comments |
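
For example, a hypothetical commit that patches a reconnect bug in an MQTT publisher could look like:

```
fix(mqtt): reconnect publisher after broker timeout

Previously the publisher exited when the broker connection dropped;
it now retries with a backoff instead.
```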


### First Time Contributors

Looking for work? Check out our issues labeled `good first issue` or `help wanted`.

## License

By contributing, you agree that your contributions will be licensed under the same license that covers the project.

## Questions?

Don't hesitate to contact the project maintainers if you have any questions!
6 changes: 3 additions & 3 deletions README.md
@@ -35,7 +35,7 @@ git clone https://github.com/ExamonHPC/examon.git
Once you have the above setup, you need to create the Docker services:

```bash
-docker-compose up -d
+docker compose up -d
```

This will build the Docker images, fetch some prebuilt ones, and then start the services. You can refer to the `docker-compose.yml` file to see the full configuration.
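
To check that the services actually came up, you can list them and tail their logs:

```bash
# All services should report a running state
docker compose ps

# Follow the logs if a container keeps restarting
docker compose logs -f
```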
@@ -62,14 +62,14 @@ Fill out the form with the following settings:
### Collecting data using the dummy "examon_pub" plugin
Once all Docker services are running (can be started either by `docker-compose up -d` or `docker-compose start`), the MQTT broker is available at `TEST_SERVER` port `1883` where `TEST_SERVER` is the address of the server where the services run.

-To test the installation we can use the `examon_pub` plugin available in the `publishers/examon_pub` folder of this project.
+To test the installation we can use the `examon_pub.py` plugin available in the `publishers/examon_pub` folder of this project.

It is highly recommended to follow the tutorial described in the Jupyter notebook `README-notebook.ipynb` to understand how an Examon plugin works.

After installing and configuring it on one or more test nodes, we can start the data collection by running, for example:

```bash
-[root@testnode00]$ ./examon_pub -b TEST_SERVER -p 1883 -t org/myorg -s 1 run
+[root@testnode00]$ python ./examon_pub.py -b TEST_SERVER -p 1883 -s 1 run
```
If everything went well, the data are available both through the Grafana interface and using the `examon-client`.

3 changes: 3 additions & 0 deletions docs/About.md
@@ -0,0 +1,3 @@
# About

ExaMon is an open source framework developed by Francesco Beneventi at [DEI - Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi"](https://dei.unibo.it/en/index.html) of the University of Bologna under the supervision of Profs. Luca Benini, Andrea Bartolini and Andrea Borghesi and in collaboration with [CINECA](https://www.hpc.cineca.it/) and [E4](https://www.e4company.com/en/).
78 changes: 78 additions & 0 deletions docs/Administrators/Getting_started.md
@@ -0,0 +1,78 @@
# ExaMon Docker Setup
This setup will install all server-side components of the ExaMon framework:

- MQTT broker and DB connector
- Grafana
- KairosDB
- Cassandra

## Prerequisites
Cassandra is the component that requires the most resources; you can find more details about the suggested hardware configuration for the system that will host the services here:

[Hardware Configuration](https://cassandra.apache.org/doc/latest/operating/hardware.html#:~:text=While%20Cassandra%20can%20be%20made,at%20least%2032GB%20of%20RAM)

To install all the services needed by ExaMon we will use Docker and Docker Compose:

[Install Docker and Docker Compose](https://docs.docker.com/engine/installation/).


## Setup

### Clone the Git repository

First you will need to clone the Git repository:

```bash
git clone https://github.com/ExamonHPC/examon.git
```
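
Then change into the repository before creating the services; the compose file is assumed to sit at the repository root:

```bash
cd examon
```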

### Create Docker Services

Once you have the above setup, you need to create the Docker services:

```bash
docker compose up -d
```

This will build the Docker images, fetch some prebuilt ones, and then start the services. You can refer to the `docker-compose.yml` file to see the full configuration.

### Configure Grafana

Log in to the Grafana server using your browser and the default credentials:

http://localhost:3000

Follow the normal procedure for adding a new data source (KairosDB):

[Add a Datasource](https://grafana.com/docs/grafana/latest/datasources/add-a-data-source/)

Fill out the form with the following settings:

- Type: `KairosDB`
- Name: `kairosdb`
- Url: http://kairosdb:8083
- Access: `Server`
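
Before saving the data source, you can check that KairosDB is answering. Its REST API exposes a health-check endpoint; from the Docker host this might look like the following, assuming port 8083 is published to the host as in the compose file:

```bash
# KairosDB replies with HTTP 204 No Content when it and its datastore are healthy
curl -i http://localhost:8083/api/v1/health/check
```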

## Usage Examples

### Collecting data using the dummy "examon_pub" plugin
Once all Docker services are running (can be started either by `docker compose up -d` or `docker compose start`), the MQTT broker is available at `TEST_SERVER` port `1883` where `TEST_SERVER` is the address of the server where the services run.

To test the installation we can use the `examon_pub.py` plugin available in the `publishers/examon_pub` folder of this project.

It is highly recommended to follow the tutorial described in the Jupyter notebook `README-notebook.ipynb` to understand how an Examon plugin works.

After installing and configuring it on one or more test nodes, we can start the data collection by running, for example:

```bash
[root@testnode00]$ python ./examon_pub.py -b TEST_SERVER -p 1883 -s 1 run
```
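
You can also watch the samples arriving on the broker with any MQTT client, for example the Mosquitto CLI; the `org/#` topic filter below is an assumption, so adjust it to the topic prefix you configured:

```bash
# Print topic and payload for every message under the org/ prefix
mosquitto_sub -h TEST_SERVER -p 1883 -t 'org/#' -v
```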
If everything went well, the data are available both through the Grafana interface and using the [examon-client](../Users/Demo_ExamonQL.ipynb).


## Where to go next

- Write your first plugin: [Example plugin](../Plugins/examon_pub.ipynb)
- Write your first query: [Example query](../Users/Demo_ExamonQL.ipynb)


23 changes: 23 additions & 0 deletions docs/Introduction.md
@@ -0,0 +1,23 @@
<figure markdown>
![](images/image1.png){ width="300" }
</figure>

ExaMon (Exascale Monitoring) is a data collection and analysis platform designed to manage large amounts of data. Its main goals are to make heterogeneous data easy to manage, in both streaming and batch mode, and to provide access to this data through a common interface. This simplifies the use of data to support applications such as real-time anomaly detection, predictive maintenance, and efficient resource and energy management leveraging machine learning and artificial intelligence techniques. Due to its scalable and distributed nature, it is easily applicable to HPC systems, especially exascale-sized ones, the primary use case for which it was designed.

The key feature of the framework is its data model, designed to be schema-less and scalable. This makes it possible to collect a huge amount of heterogeneous data under a single interface. The resulting data lake, which makes all the data available online to any user at any time, is proposed as a solution to break down internal data silos in organizations. The main benefit of this approach is that it enables capturing the full value of the data by making it immediately usable. In addition, having all the data in one place makes it easier to create complete and timely executive reports, enabling faster and more informed decisions.

Another key aspect of the framework's design is making industry data easily available for research purposes. Researchers only need to manage a single data source to get a complete picture of complex industrial systems, and the benefits can be many. Easy access to a huge variety and quantity of real-world data will enable them to create innovative solutions with results that may have real-world impact.

<figure markdown align="center">
![](images/image13.png){ width="80%" }
</figure>

Furthermore, access to a wide variety of heterogeneous data with very low latency enables the realization of accurate digital twins. Here the framework can provide both historical data for building accurate models and fresh data for quickly making inferences with those models. Moreover, the availability of up-to-date data in near real time allows the construction of visual models that convey the state of any complex system at a glance. By exploiting the language of visual communication, it is possible to extend collaboration, bringing together a wide range of experts focused on problem-solving or optimization of the system itself.

<figure markdown align="center">
![](images/image3.png){ width="80%" }
</figure>

The architecture of the framework is based on established protocols and technologies rather than specific tools and implementations. The communication layer is based on the publish-subscribe model, which has several implementations, including the MQTT protocol. The need to interact with different data sources, ranging from complex room cooling systems to internal CPU sensors, requires a simple, scalable, low-latency communication protocol that is resilient to network conditions and natively designed to enable machine-to-machine (M2M) communication in complex environments. Data persistence is handled by a NoSQL database, an industry-proven technology designed to be horizontally scalable and built to handle large amounts of data efficiently. On top of these two pillars, the other components are primarily dedicated to handling the two main categories of data that characterize the ExaMon framework. The first is the time series data type, which represents the majority of the data sources managed by ExaMon and is suitable for managing all the sensors and logs available today in a data center. The second is the generic tabular data type, suitable for managing metadata and any other data that does not fall into the first category. ExaMon provides the tools and interfaces to coordinate these two categories and expose them to the user in the most seamless way.
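
As a minimal sketch of what the publish side of this model can look like, the snippet below pushes one sample to an MQTT broker with the paho-mqtt library. The broker address, topic layout, and `value;timestamp` payload convention are illustrative assumptions, not the framework's exact wire format:

```python
import time

import paho.mqtt.publish as publish  # pip install paho-mqtt

# Hypothetical topic layout: the metric's metadata is encoded in the
# topic path, so the backend can index the sample without a fixed schema.
topic = "org/myorg/node/testnode00/plugin/examon_pub/chnl/data/cpu_temp"

# Illustrative payload convention: "<value>;<unix_timestamp>"
payload = "42.5;%.3f" % time.time()

# Publish a single sample (QoS 0, at-most-once delivery)
publish.single(topic, payload, hostname="localhost", port=1883)
```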

As a data platform, one of ExaMon's priorities is data sharing. To maximize its effectiveness, it offers both domain-specific languages (DSLs), which allow more experienced users to take full advantage of each data source's capabilities, and higher-level, standard interfaces such as the ANSI SQL language. ExaMon also promotes state-of-the-art tools for time series visualization, such as Grafana. While more experienced users can interface with ExaMon through tools such as Jupyter notebooks (via a dedicated client), more user-friendly BI solutions such as Apache Superset, which uses web visualization technologies and the ANSI SQL language, are also provided to streamline the user experience. There is also compatibility with tools such as Apache Spark and Dask for large-scale data analysis in both streaming and batch modes. Finally, CLI-type tools are available to provide access to the data and typical features directly from the user's shell.
