1. Commits to the master branch of Slang trigger a CI workflow that runs the benchmark and uploads the JSON file to this repository (specifically to `benchmarks.json`).
2. Changes to `benchmarks.json` trigger a CI workflow in this repository.
3. This workflow updates the `gh-pages` branch using `github-action-benchmark`. It reads the new results from `benchmarks.json` and updates a database in `gh-pages`.
4. Once `gh-pages` receives the update, it triggers the final CI workflow that builds the GitHub Pages site.
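Step (3) can be sketched as a GitHub Actions workflow. The file below is a hypothetical illustration, not the repository's actual configuration: the workflow name, trigger, and action inputs (`tool`, `output-file-path`, `auto-push`) are assumptions based on how `github-action-benchmark` is commonly wired up.

```yaml
# Hypothetical sketch of the workflow triggered by changes to benchmarks.json.
name: update-gh-pages
on:
  push:
    paths:
      - benchmarks.json
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # github-action-benchmark reads the uploaded results and updates the
      # database stored on the gh-pages branch.
      - uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'customSmallerIsBetter'
          output-file-path: benchmarks.json
          auto-push: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
```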
The diagram below summarizes these steps.
Notes:

- In (1), the workflow from the Slang repository overwrites `benchmarks.json` when it pushes to this repository; this is expected to happen.
- This repository contains another file, `current`, which holds the latest commit's message and hash for debugging purposes.
- Each time `benchmarks.json` is updated (2), the `github-action-benchmark` workflow reads its contents and appends the results to a database embedded in a JavaScript file.
- There is currently no mechanism to limit the number of entries in the database, so it can grow to a couple of megabytes. The data can be trimmed manually by editing the JavaScript file directly and removing entries.
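Since the embedded database is just JSON wrapped in a JavaScript assignment (`github-action-benchmark` stores it as `window.BENCHMARK_DATA = {...}`), the trimming can also be scripted rather than done by hand. A minimal sketch, assuming that layout; the function name and `keep` parameter are illustrative:

```python
import json

def trim_benchmark_db(js_text: str, keep: int) -> str:
    """Keep only the newest `keep` runs per benchmark suite.

    Assumes the github-action-benchmark layout:
    window.BENCHMARK_DATA = {"lastUpdate": ..., "entries": {suite: [run, ...]}}
    """
    prefix = "window.BENCHMARK_DATA = "
    data = json.loads(js_text.removeprefix(prefix))
    for suite, runs in data["entries"].items():
        # Runs are appended chronologically, so the tail holds the newest ones.
        data["entries"][suite] = runs[-keep:]
    return prefix + json.dumps(data, indent=2)
```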
It is possible to run the benchmark locally to see immediate results or to customize them. This requires cloning both the Slang repository and the MDL-SDK fork. Then, starting from the root of the Slang repository:
- Build the `Release` version of slangc; it should be located in `build/Release/bin/slangc`.
- Change directories to `tool/benchmark` within the Slang repository.
- Copy the Slang shaders:
  - From `<MDL-SDK REPO ROOT>/examples/mdl_sdk/dxr/content/slangified` (`*.slang` files specifically)
  - To `tool/benchmark`
- Run the `compile.py` script using Python (3.12+ recommended):
  - Requires a few lightweight packages, which can be installed with `pip` using `pip install prettytable argparse`.
  - Linux users may have to tweak the script so that the `slangc` variable ends with `slangc` instead of `slangc.exe`.
  - Script options:
    - `--target` selects the target mode:
      - `spirv` generates SPIR-V directly.
      - `spirv-glsl` generates SPIR-V via GLSL.
      - `dxil` generates DXIL through DXC.
      - `dxil-embedded` generates DXIL through DXC, but precompiles Slang modules to DXIL.
    - `--samples` sets the number of times to repeat and average the measurements over.
    - `--output` sets the path to the JSON file where results will be stored.
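For reference, the options above correspond to a command-line interface along these lines. This is a sketch of how `compile.py` might parse its arguments based on the list above, not the script's actual code; the defaults are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of compile.py's CLI from the documented options.
    parser = argparse.ArgumentParser(description="Benchmark slangc compilation times.")
    parser.add_argument(
        "--target",
        choices=["spirv", "spirv-glsl", "dxil", "dxil-embedded"],
        default="dxil",
        help="Code generation mode (SPIR-V directly or via GLSL, DXIL via DXC).",
    )
    parser.add_argument(
        "--samples", type=int, default=1,
        help="Number of times to repeat and average each measurement.",
    )
    parser.add_argument(
        "--output", default="benchmarks.json",
        help="Path of the JSON file where results are stored.",
    )
    return parser
```

A typical invocation would then look like `python compile.py --target dxil-embedded --samples 8 --output benchmarks.json`.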
The script will output timings in a Markdown-friendly way, as shown below:
- Each chart is titled with the format `<SHADER-STAGE> : <COMPILATION-MODE> : <TARGET>`:
  - `<SHADER-STAGE>` is one of `closesthit`, `anyhit`, and `shadow`.
  - `<COMPILATION-MODE>` is either `mono` for monolithic compilation or `module` for modular compilation.
  - `<TARGET>` is currently fixed to DXIL. Other targets can be generated by running the benchmarking script with a different target (DXIL or SPIR-V; with or without precompiled modules).
- The $x$-axis tracks the commit hash. Unfortunately, there is currently no way to display concrete dates.
- The $y$-axis shows the time, in milliseconds, taken to compile the specific shader stage under the particular compilation mode and target.
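Each plotted value is an average over `--samples` runs. A minimal sketch of that measurement loop, with the compile step stubbed out (`run_compile` is a placeholder for invoking slangc, not the script's actual function):

```python
import time

def average_compile_time_ms(run_compile, samples: int) -> float:
    """Average wall-clock time of run_compile() over `samples` runs, in ms."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        run_compile()  # placeholder: compile one shader stage with slangc
        total += time.perf_counter() - start
    return total / samples * 1000.0
```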
If a commit results in alarming measurements, there is a convenient way to reach the original commit/PR. Each data point of each graph can be highlighted as shown:
Clicking on the node redirects to the associated commit in this repository. Clicking on the highlighted link then redirects the user to the original commit/PR in the Slang repository.