Cleanup: Refactor benchmark table. #128

Merged · 1 commit · Apr 17, 2024
benchmarks/README.md (19 changes: 11 additions & 8 deletions)
@@ -2,20 +2,20 @@

 ## Run

-Run the `run_all.py` script to run all benchmarks.
+If you want to evaluate a single benchmark, adopt the `run_single.py` script
+and (within the `benchmarks` directory) run

 ```bash
-python run_all.py
+python run_single.py
 ```

-If you only want to evaluate a single benchmark, adopt the `run_single.py` script
-and run
+To run a specific benchmark, e.g., the sine benchmarks, run

 ```bash
-python run_single.py
+python sine/run_all.py
 ```

-## Visualize
+## MLFlow

 We use MLFlow to log the benchmark runs and you can run

@@ -25,8 +25,11 @@ mlflow ui

 to visualize the logged experiments in your browser.

-In order to visualize the best runs for every benchmark in an HTML table, run
+## Documentation
+
+In order to process the data of the best runs for the benchmark page in
+the documentation, run

 ```bash
-python process.py
+python results/process.py
 ```
Empty file removed: `benchmarks/html/img/.gitkeep`
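
The refactored README keeps the MLFlow-based workflow: benchmark runs are logged and then inspected with `mlflow ui`. As a rough, hypothetical sketch of what logging inside one of the benchmark scripts could look like (the experiment name, parameters, and loss values below are assumptions, not taken from this repository):

```python
# Hypothetical sketch of MLFlow logging in a benchmark script.
# Experiment name, parameters, and the fake "loss" values are
# illustrative assumptions, not this repository's actual code.
import mlflow

mlflow.set_experiment("sine")  # assumed experiment name

with mlflow.start_run(run_name="sine-baseline"):
    mlflow.log_param("learning_rate", 1e-3)  # assumed hyperparameter
    mlflow.log_param("num_epochs", 10)

    for epoch in range(10):
        loss = 1.0 / (epoch + 1)  # stand-in for the real benchmark metric
        mlflow.log_metric("loss", loss, step=epoch)
```

Runs logged along these lines would then show up in the `mlflow ui` view mentioned in the README.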
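
The new `results/process.py` step is described only as processing the data of the best runs for the benchmark page in the documentation. Purely as an illustrative sketch under that assumption (the metric name, experiment list, and output file are invented here), such a script might query MLFlow for each benchmark's best run and emit a Markdown table:

```python
# Hypothetical sketch of collecting the best run per experiment into a
# Markdown table for the documentation. Metric name, experiment names,
# and output path are assumptions, not this repository's actual code.
import mlflow

EXPERIMENTS = ["sine"]   # assumed benchmark names
METRIC = "metrics.loss"  # assumed metric column in the MLFlow results

rows = ["| Benchmark | Best loss | Run ID |", "| --- | --- | --- |"]
for name in EXPERIMENTS:
    runs = mlflow.search_runs(experiment_names=[name], order_by=[f"{METRIC} ASC"])
    if runs.empty:
        continue
    best = runs.iloc[0]
    rows.append(f"| {name} | {best[METRIC]:.4g} | {best['run_id']} |")

with open("benchmark_table.md", "w") as f:
    f.write("\n".join(rows) + "\n")
```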