diff --git a/.github/workflows/publish.yaml b/.github/workflows/publish.yaml
new file mode 100644
index 000000000..214ac3611
--- /dev/null
+++ b/.github/workflows/publish.yaml
@@ -0,0 +1,33 @@
+# Workflow to build and publish the MkDocs documentation site
+
+name: Publish site
+
+
+on:
+ release:
+ types: [published]
+ push:
+ branches:
+ - master
+ - docs
+
+jobs:
+
+ publish:
+ name: Publish the site
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Checkout repository normally
+ uses: actions/checkout@v3
+
+ - name: Set up Python
+ uses: actions/setup-python@v4
+ with:
+ python-version: "3.11"
+
+ - name: Install Mkdocs
+ run: pip install -r docs/requirements.txt
+
+ - name: Run Mkdocs deploy
+ run: mkdocs gh-deploy --force
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 000000000..477c13a7b
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,9 @@
+# Documentation Website for MLPerf Inference using the unified CM interface
+
+## Commands to get the website running locally
+```
+git clone https://github.com/GATEOverflow/cm4mlperf-inference
+cd cm4mlperf-inference
+pip install -r requirements.txt
+mkdocs serve
+```
diff --git a/docs/benchmarks/image_classification/resnet50.md b/docs/benchmarks/image_classification/resnet50.md
new file mode 100644
index 000000000..1a77db65a
--- /dev/null
+++ b/docs/benchmarks/image_classification/resnet50.md
@@ -0,0 +1,68 @@
+# Image Classification using ResNet50
+
+## Dataset
+
+The benchmark implementation run command will automatically download the validation and calibration datasets and do the necessary preprocessing. If you want to download only the datasets, you can use the commands below.
+
+=== "Validation"
+    The ResNet50 validation run uses the ImageNet 2012 validation dataset, consisting of 50,000 images.
+
+ ### Get Validation Dataset
+ ```
+ cm run script --tags=get,dataset,imagenet,validation -j
+ ```
+=== "Calibration"
+    The ResNet50 calibration dataset consists of 500 images selected from the ImageNet 2012 validation dataset. There are two alternative options for the calibration dataset.
+
+ ### Get Calibration Dataset Using Option 1
+ ```
+ cm run script --tags=get,dataset,imagenet,calibration,_mlperf.option1 -j
+ ```
+ ### Get Calibration Dataset Using Option 2
+ ```
+ cm run script --tags=get,dataset,imagenet,calibration,_mlperf.option2 -j
+ ```
+
+## Model
+The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you only want to download the official model, you can use the commands below.
+
+Get the Official MLPerf ResNet50 Model
+
+=== "Tensorflow"
+
+ ### Tensorflow
+ ```
+ cm run script --tags=get,ml-model,resnet50,_tensorflow -j
+ ```
+=== "Onnx"
+
+ ### Onnx
+ ```
+ cm run script --tags=get,ml-model,resnet50,_onnx -j
+ ```
+
+## Benchmark Implementations
+=== "MLCommons-Python"
+ ### MLPerf Reference Implementation in Python
+
+{{ mlperf_inference_implementation_readme (4, "resnet50", "reference") }}
+
+=== "Nvidia"
+ ### Nvidia MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "resnet50", "nvidia") }}
+
+=== "Intel"
+ ### Intel MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "resnet50", "intel") }}
+
+=== "Qualcomm"
+ ### Qualcomm AI100 MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "resnet50", "qualcomm") }}
+
+=== "MLCommon-C++"
+ ### MLPerf Modular Implementation in C++
+
+{{ mlperf_inference_implementation_readme (4, "resnet50", "cpp") }}
diff --git a/docs/benchmarks/index.md b/docs/benchmarks/index.md
new file mode 100644
index 000000000..7f528638b
--- /dev/null
+++ b/docs/benchmarks/index.md
@@ -0,0 +1,28 @@
+# MLPerf Inference Benchmarks
+
+Please visit the individual benchmark links to see the run commands using the unified CM interface.
+
+1. [Image Classification](image_classification/resnet50.md) using ResNet50 model and Imagenet-2012 dataset
+
+2. [Text to Image](text_to_image/sdxl.md) using Stable Diffusion model and Coco2014 dataset
+
+3. [Object Detection](object_detection/retinanet.md) using Retinanet model and OpenImages dataset
+
+4. [Image Segmentation](medical_imaging/3d-unet.md) using 3d-unet model and KiTS19 dataset
+
+5. [Question Answering](language/bert.md) using Bert-Large model and SQuAD v1.1 dataset
+
+6. [Text Summarization](language/gpt-j.md) using GPT-J model and CNN Daily Mail dataset
+
+7. [Text Summarization](language/llama2-70b.md) using LLAMA2-70b model and OpenORCA dataset
+
+8. [Recommendation](recommendation/dlrm-v2.md) using DLRMv2 model and Criteo multihot dataset
+
+All eight benchmarks can participate in the datacenter category.
+All eight benchmarks except DLRMv2 and LLAMA2 can participate in the edge category.
+
+`bert`, `llama2-70b`, `dlrm_v2` and `3d-unet` have a high accuracy (99.9%) variant, where the benchmark run must achieve an accuracy of at least `99.9%` of the FP32 reference model,
+compared with the default `99%` accuracy requirement.
+
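+For example (an illustrative sketch only; the individual benchmark pages generate the exact commands for each implementation, framework and device), a high accuracy variant is selected simply by using the `-99.9` suffix in the model name:
+
+```bash
+cm run script --tags=run-mlperf,inference \
+   --model=bert-99.9 \
+   --implementation=reference \
+   --framework=pytorch \
+   --category=datacenter \
+   --scenario=Offline \
+   --execution-mode=test \
+   --device=cpu \
+   --docker \
+   --test_query_count=20
+```
+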
+The `dlrm_v2` benchmark has a high-accuracy variant only. If this accuracy is not met, the submission result can be submitted only to the open division.
+
diff --git a/docs/benchmarks/language/bert.md b/docs/benchmarks/language/bert.md
new file mode 100644
index 000000000..e2aa0995d
--- /dev/null
+++ b/docs/benchmarks/language/bert.md
@@ -0,0 +1,73 @@
+# Question Answering using Bert-Large
+
+## Dataset
+
+The benchmark implementation run command will automatically download the validation and calibration datasets and do the necessary preprocessing. If you want to download only the datasets, you can use the commands below.
+
+=== "Validation"
+ BERT validation run uses the SQuAD v1.1 dataset.
+
+ ### Get Validation Dataset
+ ```
+ cm run script --tags=get,dataset,squad,validation -j
+ ```
+
+## Model
+The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you only want to download the official model, you can use the commands below.
+
+Get the Official MLPerf Bert-Large Model
+
+=== "Pytorch"
+
+ ### Pytorch
+ ```
+ cm run script --tags=get,ml-model,bert-large,_pytorch -j
+ ```
+=== "Onnx"
+
+ ### Onnx
+ ```
+ cm run script --tags=get,ml-model,bert-large,_onnx -j
+ ```
+=== "Tensorflow"
+
+ ### Tensorflow
+ ```
+ cm run script --tags=get,ml-model,bert-large,_tensorflow -j
+ ```
+
+## Benchmark Implementations
+=== "MLCommons-Python"
+ ### MLPerf Reference Implementation in Python
+
+ BERT-99
+{{ mlperf_inference_implementation_readme (4, "bert-99", "reference") }}
+
+ BERT-99.9
+{{ mlperf_inference_implementation_readme (4, "bert-99.9", "reference") }}
+
+=== "Nvidia"
+ ### Nvidia MLPerf Implementation
+
+ BERT-99
+{{ mlperf_inference_implementation_readme (4, "bert-99", "nvidia") }}
+
+ BERT-99.9
+{{ mlperf_inference_implementation_readme (4, "bert-99.9", "nvidia") }}
+
+=== "Intel"
+ ### Intel MLPerf Implementation
+ BERT-99
+{{ mlperf_inference_implementation_readme (4, "bert-99", "intel") }}
+
+ BERT-99.9
+{{ mlperf_inference_implementation_readme (4, "bert-99.9", "intel") }}
+
+=== "Qualcomm"
+ ### Qualcomm AI100 MLPerf Implementation
+
+ BERT-99
+{{ mlperf_inference_implementation_readme (4, "bert-99", "qualcomm") }}
+
+ BERT-99.9
+{{ mlperf_inference_implementation_readme (4, "bert-99.9", "qualcomm") }}
diff --git a/docs/benchmarks/language/gpt-j.md b/docs/benchmarks/language/gpt-j.md
new file mode 100644
index 000000000..d1c351214
--- /dev/null
+++ b/docs/benchmarks/language/gpt-j.md
@@ -0,0 +1,57 @@
+# Text Summarization using GPT-J
+
+## Dataset
+
+The benchmark implementation run command will automatically download the validation and calibration datasets and do the necessary preprocessing. If you want to download only the datasets, you can use the commands below.
+
+=== "Validation"
+ GPT-J validation run uses the CNNDM dataset.
+
+ ### Get Validation Dataset
+ ```
+ cm run script --tags=get,dataset,cnndm,validation -j
+ ```
+
+## Model
+The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you only want to download the official model, you can use the commands below.
+
+Get the Official MLPerf GPT-J Model
+
+=== "Pytorch"
+
+ ### Pytorch
+ ```
+ cm run script --tags=get,ml-model,gptj,_pytorch -j
+ ```
+
+## Benchmark Implementations
+=== "MLCommons-Python"
+ ### MLPerf Reference Implementation in Python
+
+    GPTJ-99
+{{ mlperf_inference_implementation_readme (4, "gptj-99", "reference") }}
+
+ GPTJ-99.9
+{{ mlperf_inference_implementation_readme (4, "gptj-99.9", "reference") }}
+
+=== "Nvidia"
+ ### Nvidia MLPerf Implementation
+
+ GPTJ-99
+{{ mlperf_inference_implementation_readme (4, "gptj-99", "nvidia") }}
+
+ GPTJ-99.9
+{{ mlperf_inference_implementation_readme (4, "gptj-99.9", "nvidia") }}
+
+=== "Intel"
+ ### Intel MLPerf Implementation
+ GPTJ-99
+{{ mlperf_inference_implementation_readme (4, "gptj-99", "intel") }}
+
+
+=== "Qualcomm"
+ ### Qualcomm AI100 MLPerf Implementation
+
+ GPTJ-99
+{{ mlperf_inference_implementation_readme (4, "gptj-99", "qualcomm") }}
+
diff --git a/docs/benchmarks/language/llama2-70b.md b/docs/benchmarks/language/llama2-70b.md
new file mode 100644
index 000000000..7f8052aef
--- /dev/null
+++ b/docs/benchmarks/language/llama2-70b.md
@@ -0,0 +1,52 @@
+# Text Summarization using LLAMA2-70b
+
+## Dataset
+
+The benchmark implementation run command will automatically download the validation and calibration datasets and do the necessary preprocessing. If you want to download only the datasets, you can use the commands below.
+
+=== "Validation"
+ LLAMA2-70b validation run uses the Open ORCA dataset.
+
+ ### Get Validation Dataset
+ ```
+ cm run script --tags=get,dataset,openorca,validation -j
+ ```
+
+## Model
+The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you only want to download the official model, you can use the commands below.
+
+Get the Official MLPerf LLAMA2-70b Model
+
+=== "Pytorch"
+
+ ### Pytorch
+ ```
+ cm run script --tags=get,ml-model,llama2-70b,_pytorch -j
+ ```
+
+## Benchmark Implementations
+=== "MLCommons-Python"
+ ### MLPerf Reference Implementation in Python
+
+ LLAMA2-70b-99
+{{ mlperf_inference_implementation_readme (4, "llama2-70b-99", "reference") }}
+
+ LLAMA2-70b-99.9
+{{ mlperf_inference_implementation_readme (4, "llama2-70b-99.9", "reference") }}
+
+=== "Nvidia"
+ ### Nvidia MLPerf Implementation
+
+ LLAMA2-70b-99
+{{ mlperf_inference_implementation_readme (4, "llama2-70b-99", "nvidia") }}
+
+ LLAMA2-70b-99.9
+{{ mlperf_inference_implementation_readme (4, "llama2-70b-99.9", "nvidia") }}
+
+
+=== "Qualcomm"
+ ### Qualcomm AI100 MLPerf Implementation
+
+ LLAMA2-70b-99
+{{ mlperf_inference_implementation_readme (4, "llama2-70b-99", "qualcomm") }}
+
diff --git a/docs/benchmarks/medical_imaging/3d-unet.md b/docs/benchmarks/medical_imaging/3d-unet.md
new file mode 100644
index 000000000..bd3ccae40
--- /dev/null
+++ b/docs/benchmarks/medical_imaging/3d-unet.md
@@ -0,0 +1,60 @@
+# Medical Imaging using 3d-unet (KiTS 2019 kidney tumor segmentation task)
+
+## Dataset
+
+The benchmark implementation run command will automatically download the validation and calibration datasets and do the necessary preprocessing. If you want to download only the datasets, you can use the commands below.
+
+=== "Validation"
+    The 3d-unet validation run uses the KiTS19 dataset for the [KiTS 2019](https://kits19.grand-challenge.org/) kidney tumor segmentation task.
+
+ ### Get Validation Dataset
+ ```
+ cm run script --tags=get,dataset,kits19,validation -j
+ ```
+
+## Model
+The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you only want to download the official model, you can use the commands below.
+
+Get the Official MLPerf 3d-unet Model
+
+=== "Pytorch"
+
+ ### Pytorch
+ ```
+ cm run script --tags=get,ml-model,3d-unet,_pytorch -j
+ ```
+=== "Onnx"
+
+ ### Onnx
+ ```
+ cm run script --tags=get,ml-model,3d-unet,_onnx -j
+ ```
+=== "Tensorflow"
+
+ ### Tensorflow
+ ```
+ cm run script --tags=get,ml-model,3d-unet,_tensorflow -j
+ ```
+
+## Benchmark Implementations
+=== "MLCommons-Python"
+ ### MLPerf Reference Implementation in Python
+
+ 3d-unet-99.9
+{{ mlperf_inference_implementation_readme (4, "3d-unet-99.9", "reference") }}
+
+=== "Nvidia"
+ ### Nvidia MLPerf Implementation
+ 3d-unet-99
+{{ mlperf_inference_implementation_readme (4, "3d-unet-99", "nvidia") }}
+
+ 3d-unet-99.9
+{{ mlperf_inference_implementation_readme (4, "3d-unet-99.9", "nvidia") }}
+
+=== "Intel"
+ ### Intel MLPerf Implementation
+ 3d-unet-99
+{{ mlperf_inference_implementation_readme (4, "3d-unet-99", "intel") }}
+
+ 3d-unet-99.9
+{{ mlperf_inference_implementation_readme (4, "3d-unet-99.9", "intel") }}
diff --git a/docs/benchmarks/object_detection/retinanet.md b/docs/benchmarks/object_detection/retinanet.md
new file mode 100644
index 000000000..f500f616d
--- /dev/null
+++ b/docs/benchmarks/object_detection/retinanet.md
@@ -0,0 +1,63 @@
+# Object Detection using Retinanet
+
+## Dataset
+
+The benchmark implementation run command will automatically download the validation and calibration datasets and do the necessary preprocessing. If you want to download only the datasets, you can use the commands below.
+
+=== "Validation"
+    The Retinanet validation run uses the OpenImages v6 MLPerf validation dataset, resized to 800x800 and consisting of 24,576 images.
+
+ ### Get Validation Dataset
+ ```
+ cm run script --tags=get,dataset,openimages,_validation -j
+ ```
+=== "Calibration"
+    The Retinanet calibration dataset consists of 500 images selected from the OpenImages v6 dataset.
+
+ ```
+ cm run script --tags=get,dataset,openimages,_calibration -j
+ ```
+
+## Model
+The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you only want to download the official model, you can use the commands below.
+
+Get the Official MLPerf Retinanet Model
+
+=== "Pytorch"
+
+ ### Pytorch
+ ```
+ cm run script --tags=get,ml-model,retinanet,_pytorch -j
+ ```
+=== "Onnx"
+
+ ### Onnx
+ ```
+ cm run script --tags=get,ml-model,retinanet,_onnx -j
+ ```
+
+## Benchmark Implementations
+=== "MLCommons-Python"
+ ### MLPerf Reference Implementation in Python
+
+{{ mlperf_inference_implementation_readme (4, "retinanet", "reference") }}
+
+=== "Nvidia"
+ ### Nvidia MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "retinanet", "nvidia") }}
+
+=== "Intel"
+ ### Intel MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "retinanet", "intel") }}
+
+=== "Qualcomm"
+ ### Qualcomm AI100 MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "retinanet", "qualcomm") }}
+
+=== "MLCommon-C++"
+ ### MLPerf Modular Implementation in C++
+
+{{ mlperf_inference_implementation_readme (4, "retinanet", "cpp") }}
diff --git a/docs/benchmarks/recommendation/dlrm-v2.md b/docs/benchmarks/recommendation/dlrm-v2.md
new file mode 100644
index 000000000..1294b008b
--- /dev/null
+++ b/docs/benchmarks/recommendation/dlrm-v2.md
@@ -0,0 +1,36 @@
+# Recommendation using DLRM v2
+
+## Dataset
+
+The benchmark implementation run command will automatically download the validation and calibration datasets and do the necessary preprocessing. If you want to download only the datasets, you can use the commands below.
+
+=== "Validation"
+ DLRM validation run uses the Criteo dataset (Day 23).
+
+ ### Get Validation Dataset
+ ```
+ cm run script --tags=get,dataset,criteo,validation -j
+ ```
+## Model
+The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you only want to download the official model, you can use the commands below.
+
+Get the Official MLPerf DLRM v2 Model
+
+=== "Pytorch"
+
+ ### Pytorch
+ ```
+ cm run script --tags=get,ml-model,dlrm_v2,_pytorch -j
+ ```
+
+## Benchmark Implementations
+=== "MLCommons-Python"
+ ### MLPerf Reference Implementation in Python
+
+{{ mlperf_inference_implementation_readme (4, "dlrm_v2-99.9", "reference") }}
+
+=== "Nvidia"
+ ### Nvidia MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "dlrm_v2-99.9", "nvidia") }}
+
diff --git a/docs/benchmarks/text_to_image/sdxl.md b/docs/benchmarks/text_to_image/sdxl.md
new file mode 100644
index 000000000..2e9c95c66
--- /dev/null
+++ b/docs/benchmarks/text_to_image/sdxl.md
@@ -0,0 +1,49 @@
+# Text to Image using Stable Diffusion
+
+## Dataset
+
+The benchmark implementation run command will automatically download the validation and calibration datasets and do the necessary preprocessing. If you want to download only the datasets, you can use the commands below.
+
+=== "Validation"
+ Stable Diffusion validation run uses the Coco 2014 dataset.
+
+ ### Get Validation Dataset
+ ```
+ cm run script --tags=get,dataset,coco2014,_validation -j
+ ```
+
+## Model
+The benchmark implementation run command will automatically download the required model and do the necessary conversions. If you only want to download the official model, you can use the commands below.
+
+Get the Official MLPerf Stable Diffusion Model
+
+=== "Pytorch"
+
+ ### Pytorch
+ ```
+ cm run script --tags=get,ml-model,sdxl,_pytorch -j
+ ```
+
+## Benchmark Implementations
+=== "MLCommons-Python"
+ ### MLPerf Reference Implementation in Python
+
+{{ mlperf_inference_implementation_readme (4, "sdxl", "reference") }}
+
+=== "Nvidia"
+ ### Nvidia MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "sdxl", "nvidia") }}
+
+=== "Intel"
+ ### Intel MLPerf Implementation
+
+{{ mlperf_inference_implementation_readme (4, "sdxl", "intel") }}
+
+
+=== "Qualcomm"
+ ### Qualcomm AI100 MLPerf Implementation
+
+
+{{ mlperf_inference_implementation_readme (4, "sdxl", "qualcomm") }}
+
diff --git a/docs/changelog/changelog.md b/docs/changelog/changelog.md
new file mode 100644
index 000000000..8b9abcc4b
--- /dev/null
+++ b/docs/changelog/changelog.md
@@ -0,0 +1,2 @@
+# Release Notes
+
diff --git a/docs/changelog/index.md b/docs/changelog/index.md
new file mode 100644
index 000000000..f68abc5b1
--- /dev/null
+++ b/docs/changelog/index.md
@@ -0,0 +1,2 @@
+# What's New, What's Coming
+
diff --git a/docs/demos/index.md b/docs/demos/index.md
new file mode 100644
index 000000000..1c23a5f60
--- /dev/null
+++ b/docs/demos/index.md
@@ -0,0 +1,2 @@
+# Demos
+
diff --git a/docs/img/logo_v2.svg b/docs/img/logo_v2.svg
new file mode 100644
index 000000000..fb655c627
--- /dev/null
+++ b/docs/img/logo_v2.svg
@@ -0,0 +1,6 @@
+
+
diff --git a/docs/index.md b/docs/index.md
new file mode 120000
index 000000000..32d46ee88
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1 @@
+../README.md
\ No newline at end of file
diff --git a/docs/install/index.md b/docs/install/index.md
new file mode 100644
index 000000000..d90c59eef
--- /dev/null
+++ b/docs/install/index.md
@@ -0,0 +1,105 @@
+# Installation
+We use the MLCommons CM automation framework to run MLPerf inference benchmarks.
+
+## CM Install
+
+We have successfully tested CM on
+
+* Ubuntu 18.x, 20.x, 22.x, 23.x
+* RedHat 8, RedHat 9, CentOS 8
+* macOS
+* Windows 10, Windows 11
+
+=== "Ubuntu"
+ ### Ubuntu, Debian
+
+
+ ```bash
+ sudo apt update && sudo apt upgrade
+ sudo apt install python3 python3-pip python3-venv git wget curl
+ ```
+
+    **Note that on Ubuntu 23+ you must set up a Python virtual environment before installing any Python package:**
+ ```bash
+ python3 -m venv cm
+ source cm/bin/activate
+ ```
+
+ You can now install CM via PIP:
+
+ ```bash
+ python3 -m pip install cmind
+ ```
+
+    You might need to run the following command to update your `PATH` to include the binary paths from pip installs:
+
+ ```bash
+ source $HOME/.profile
+ ```
+
+    You can check that CM is available by running the `cm` command.
+
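+    For example (assuming the pip install succeeded and your `PATH` is up to date), running `cm` without any arguments should print the CM help message:
+
+    ```bash
+    cm
+    ```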
+
+=== "Red Hat"
+ ### Red Hat
+
+ ```bash
+ sudo dnf update
+ sudo dnf install python3 python-pip git wget curl
+ python3 -m pip install cmind --user
+ ```
+
+=== "macOS"
+ ### macOS
+
+ *Note that CM currently does not work with Python installed from the Apple Store.
+ Please install Python via brew as described below.*
+
+ If `brew` package manager is not installed, please install it as follows (see details [here](https://brew.sh/)):
+ ```bash
+ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+ ```
+
+    Don't forget to add brew to your `PATH` environment variable, as described at the end of the installation output.
+
+ Then install python, pip, git and wget:
+
+ ```bash
+ brew install python3 git wget curl
+ python3 -m pip install cmind
+ ```
+
+=== "Windows"
+
+ ### Windows
+ * Configure Windows 10+ to [support long paths](https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry#enable-long-paths-in-windows-10-version-1607-and-later) from command line as admin:
+
+ ```bash
+ reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
+ ```
+
+ * Download and install Git from [git-for-windows.github.io](https://git-for-windows.github.io).
+ * Configure Git to accept long file names: `git config --system core.longpaths true`
+ * Download and install Python 3+ from [www.python.org/downloads/windows](https://www.python.org/downloads/windows).
+    * Don't forget to select the option to add Python binaries to the PATH environment variable!
+    * Configure Windows to accept long file names during Python installation!
+
+ * Install CM via PIP:
+
+ ```bash
+ python -m pip install cmind
+ ```
+
+ *Note that we [have reports](https://github.com/mlcommons/ck/issues/844)
+ that CM does not work when Python was first installed from the Microsoft Store.
+ If CM fails to run, you can find a fix [here](https://stackoverflow.com/questions/57485491/python-python3-executes-in-command-prompt-but-does-not-run-correctly)*.
+
+Please visit the [official CM installation page](https://github.com/mlcommons/ck/blob/master/docs/installation.md) for more details.
+
+## Download the CM MLOps Repository
+
+```bash
+ cm pull repo mlcommons@cm4mlops --branch=mlperf-inference
+```
+
+You are now ready to use the `cm` commands to run MLPerf inference, as described on the [benchmarks](../benchmarks) page.
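+
+As an illustration only (the benchmark pages generate the exact commands for each implementation, framework and device), a containerized test run of the ResNet50 reference implementation follows this pattern:
+
+```bash
+cm run script --tags=run-mlperf,inference \
+   --model=resnet50 \
+   --implementation=reference \
+   --framework=onnxruntime \
+   --category=edge \
+   --scenario=Offline \
+   --execution-mode=test \
+   --device=cpu \
+   --docker \
+   --test_query_count=20
+```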
diff --git a/docs/requirements.txt b/docs/requirements.txt
new file mode 100644
index 000000000..39fab4e1f
--- /dev/null
+++ b/docs/requirements.txt
@@ -0,0 +1,4 @@
+mkdocs-material
+swagger-markdown
+mkdocs-macros-plugin
+ruamel.yaml
diff --git a/docs/submission/index.md b/docs/submission/index.md
new file mode 100644
index 000000000..b5ff53033
--- /dev/null
+++ b/docs/submission/index.md
@@ -0,0 +1,88 @@
+If you follow the `cm run` commands under the individual model pages in the [benchmarks](../benchmarks) directory, all the valid results will get aggregated in the `cm cache` folder. Once all the results across all the models are ready, you can use the following command to generate a valid submission tree compliant with the [MLPerf requirements](https://github.com/mlcommons/policies/blob/master/submission_rules.adoc#inference-1).
+
+## Generate actual submission tree
+
+=== "Closed Edge"
+ ### Closed Edge Submission
+ ```bash
+    cm run script --tags=generate,inference,submission \
+ --clean \
+ --preprocess_submission=yes \
+ --run-checker \
+ --submitter=MLCommons \
+ --tar=yes \
+ --env.CM_TAR_OUTFILE=submission.tar.gz \
+ --division=closed \
+ --category=edge \
+ --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
+ --quiet
+ ```
+
+=== "Closed Datacenter"
+ ### Closed Datacenter Submission
+ ```bash
+    cm run script --tags=generate,inference,submission \
+ --clean \
+ --preprocess_submission=yes \
+ --run-checker \
+ --submitter=MLCommons \
+ --tar=yes \
+ --env.CM_TAR_OUTFILE=submission.tar.gz \
+ --division=closed \
+ --category=datacenter \
+ --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
+ --quiet
+ ```
+=== "Open Edge"
+ ### Open Edge Submission
+ ```bash
+    cm run script --tags=generate,inference,submission \
+ --clean \
+ --preprocess_submission=yes \
+ --run-checker \
+ --submitter=MLCommons \
+ --tar=yes \
+ --env.CM_TAR_OUTFILE=submission.tar.gz \
+ --division=open \
+ --category=edge \
+ --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
+ --quiet
+ ```
+=== "Open Datacenter"
+    ### Open Datacenter Submission
+ ```bash
+    cm run script --tags=generate,inference,submission \
+ --clean \
+ --preprocess_submission=yes \
+ --run-checker \
+ --submitter=MLCommons \
+ --tar=yes \
+ --env.CM_TAR_OUTFILE=submission.tar.gz \
+ --division=open \
+ --category=datacenter \
+ --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
+ --quiet
+ ```
+
+* Use `--hw_name="My system name"` to give a meaningful system name. Examples can be seen [here](https://github.com/mlcommons/inference_results_v3.0/tree/main/open/cTuning/systems)
+
+* Use the `--submitter` option if your organization is an official MLCommons member and would like to submit under its own name
+
+* Use the `--hw_notes_extra` option to add additional notes, e.g. `--hw_notes_extra="Result taken by NAME"`
+
+The above command should generate `submission.tar.gz` if there are no submission checker issues, and you can then upload it to the [MLCommons Submission UI](https://submissions-ui.mlcommons.org/submission).
+
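+For example (illustrative values only; substitute your own system name, submitter and notes), a closed edge submission combining these options could look like:
+
+```bash
+cm run script --tags=generate,inference,submission \
+   --clean \
+   --preprocess_submission=yes \
+   --run-checker \
+   --submitter=MLCommons \
+   --hw_name="My system name" \
+   --hw_notes_extra="Result taken by NAME" \
+   --tar=yes \
+   --env.CM_TAR_OUTFILE=submission.tar.gz \
+   --division=closed \
+   --category=edge \
+   --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
+   --quiet
+```
+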
+## Aggregate Results in GitHub
+
+If you are collecting results across multiple systems, you can generate separate submissions, aggregate them in a GitHub repository (which can be private), and use it to generate a single tarball that can be uploaded to the [MLCommons Submission UI](https://submissions-ui.mlcommons.org/submission).
+
+Run the following command after **replacing `--repo_url` with your GitHub repository URL**.
+
+```bash
+ cm run script --tags=push,github,mlperf,inference,submission \
+ --repo_url=https://github.com/GATEOverflow/mlperf_inference_submissions_v4.1 \
+ --commit_message="Results on added by " \
+ --quiet
+```
+
+Finally, you can download the GitHub repository and upload its contents to the [MLCommons Submission UI](https://submissions-ui.mlcommons.org/submission).
diff --git a/docs/submission/tools-readme.md b/docs/submission/tools-readme.md
new file mode 120000
index 000000000..d6f026eab
--- /dev/null
+++ b/docs/submission/tools-readme.md
@@ -0,0 +1 @@
+../../tools/submission/README.md
\ No newline at end of file
diff --git a/docs/usage/index.md b/docs/usage/index.md
new file mode 100644
index 000000000..2e92a6e00
--- /dev/null
+++ b/docs/usage/index.md
@@ -0,0 +1 @@
+# Using CM for MLPerf Inference
diff --git a/main.py b/main.py
new file mode 100644
index 000000000..d8facefce
--- /dev/null
+++ b/main.py
@@ -0,0 +1,103 @@
+def define_env(env):
+
+ @env.macro
+ def mlperf_inference_implementation_readme(spaces, model, implementation):
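+        # Builds nested markdown content tabs (category -> framework -> device -> scenario)
+        # containing the generated CM run commands for the given model and implementation.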
+ pre_space = ""
+ for i in range(1,spaces):
+ pre_space = pre_space + " "
+ f_pre_space = pre_space
+ pre_space += " "
+
+ content=""
+ if implementation == "reference":
+ devices = [ "CPU", "CUDA", "ROCm" ]
+ if model.lower() == "resnet50":
+ frameworks = [ "Onnxruntime", "Tensorflow", "Deepsparse" ]
+ elif model.lower() == "retinanet":
+ frameworks = [ "Onnxruntime", "Pytorch" ]
+ elif "bert" in model.lower():
+ frameworks = [ "Onnxruntime", "Pytorch", "Tensorflow" ]
+ else:
+ frameworks = [ "Pytorch" ]
+ elif implementation == "nvidia":
+ devices = [ "CUDA" ]
+ frameworks = [ "TensorRT" ]
+ elif implementation == "intel":
+ devices = [ "CPU" ]
+ frameworks = [ "Pytorch" ]
+ elif implementation == "qualcomm":
+ devices = [ "QAIC" ]
+ frameworks = [ "Glow" ]
+ elif implementation == "cpp":
+ devices = [ "CPU", "CUDA" ]
+ frameworks = [ "Onnxruntime" ]
+
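+        # bert-99.9, dlrm and llama2 models get only datacenter tabs;
+        # all other models get both edge and datacenter tabs.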
+ if model.lower() == "bert-99.9":
+ categories = [ "Datacenter" ]
+ elif "dlrm" in model.lower() or "llama2" in model.lower():
+ categories = [ "Datacenter" ]
+ else:
+ categories = [ "Edge", "Datacenter" ]
+
+ for category in categories:
+ if category == "Edge":
+ scenarios = [ "Offline", "SingleStream" ]
+ if model.lower() in [ "resnet50", "retinanet" ]:
+ scenarios.append("Multistream")
+ elif category == "Datacenter":
+ scenarios = [ "Offline", "Server" ]
+
+ content += f"{pre_space}=== \"{category.lower()}\"\n\n"
+
+ cur_space = pre_space + " "
+ scenarios_string = ", ".join(scenarios)
+
+ content += f"{cur_space}#### {category} category \n\n{cur_space} In the {category.lower()} category, {model} has {scenarios_string} scenarios and all the scenarios are mandatory for a closed division submission.\n\n"
+
+
+ for framework in frameworks:
+ cur_space1 = cur_space + " "
+ content += f"{cur_space}=== \"{framework}\"\n"
+ content += f"{cur_space1}##### {framework} framework\n\n"
+
+ for device in devices:
+ if framework.lower() == "deepsparse":
+ if device.lower() != "cpu":
+ continue
+ cur_space2 = cur_space1 + " "
+ content += f"{cur_space1}=== \"{device}\"\n"
+ content += f"{cur_space2}###### {device} device\n\n"
+
+ for scenario in scenarios:
+ cur_space3 = cur_space2 + " "
+ content += f"{cur_space2}=== \"{scenario}\"\n{cur_space3}####### {scenario}\n"
+ run_cmd = mlperf_inference_run_command(spaces+16, model, implementation, framework.lower(), category.lower(), scenario, device.lower(), "valid")
+ content += run_cmd
+ content += f"{cur_space2}=== \"All Scenarios\"\n{cur_space3}####### All Scenarios\n"
+ run_cmd = mlperf_inference_run_command(spaces+16, model, implementation, framework.lower(), category.lower(), "All Scenarios", device.lower(), "valid")
+ content += run_cmd
+
+ return content
+
+
+ @env.macro
+ def mlperf_inference_run_command(spaces, model, implementation, framework, category, scenario, device="cpu", execution_mode="test", test_query_count="20"):
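+        # Builds a single `cm run script --tags=run-mlperf,inference ...` command string,
+        # indented so that it renders correctly inside the generated content tabs.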
+ pre_space = ""
+ for i in range(1,spaces):
+ pre_space = pre_space + " "
+ f_pre_space = pre_space
+ pre_space += " "
+
+ if scenario == "All Scenarios":
+ scenario_variation_tag = ",_all-scenarios"
+ scenario_option = ""
+ else:
+ scenario_variation_tag = ""
+ scenario_option = f"\\\n {pre_space} --scenario={scenario}"
+
+ cmd_suffix = f" \\\n {pre_space} --docker"
+ #cmd_suffix = f""
+ if execution_mode == "test":
+ cmd_suffix += f" \\\n {pre_space} --test_query_count={test_query_count}"
+
+ return f"\n{f_pre_space} ```bash\n{f_pre_space} cm run script --tags=run-mlperf,inference{scenario_variation_tag} \\\n {pre_space} --model={model} \\\n {pre_space} --implementation={implementation} \\\n {pre_space} --framework={framework} \\\n {pre_space} --category={category} {scenario_option} \\\n {pre_space} --execution-mode={execution_mode} \\\n {pre_space} --device={device} {cmd_suffix}\n{f_pre_space} ```\n"
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 000000000..0d0f64152
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,66 @@
+site_name: MLPerf Inference Documentation
+repo_url: https://github.com/mlcommons/inference
+theme:
+ name: material
+ logo: img/logo_v2.svg
+ favicon: img/logo_v2.svg
+ palette:
+ primary: deep purple
+ accent: green
+ features:
+ - content.tabs.link
+ - content.code.copy
+ - navigation.expand
+ - navigation.sections
+ - navigation.indexes
+ - navigation.instant
+ - navigation.tabs
+ - navigation.tabs.sticky
+ - navigation.top
+ - toc.follow
+nav:
+ - Inference:
+ - index.md
+ - Install:
+ - install/index.md
+ - Quick Start: install/quick-start.md
+ - Benchmarks:
+ - benchmarks/index.md
+ - Image Classification:
+ - ResNet50: benchmarks/image_classification/resnet50.md
+ - Text to Image:
+ - Stable Diffusion: benchmarks/text_to_image/sdxl.md
+ - Object Detection:
+ - RetinaNet: benchmarks/object_detection/retinanet.md
+ - Medical Imaging:
+ - 3d-unet: benchmarks/medical_imaging/3d-unet.md
+ - Language Processing:
+ - Bert-Large: benchmarks/language/bert.md
+ - GPT-J: benchmarks/language/gpt-j.md
+ - LLAMA2-70B: benchmarks/language/llama2-70b.md
+ - Recommendation:
+ - DLRM-v2: benchmarks/recommendation/dlrm-v2.md
+ - Submission:
+ - Submission Generation: submission/index.md
+ - Release Notes:
+ - What's New: changelog/index.md
+ - Changelog: changelog/changelog.md
+
+markdown_extensions:
+ - pymdownx.tasklist:
+ custom_checkbox: true
+ - pymdownx.details
+ - admonition
+ - attr_list
+ - def_list
+ - footnotes
+ - pymdownx.superfences:
+ custom_fences:
+ - name: mermaid
+ class: mermaid
+ format: !!python/name:pymdownx.superfences.fence_code_format
+ - pymdownx.tabbed:
+ alternate_style: true
+plugins:
+ - search
+ - macros