
Text to Image using Stable Diffusion for Student Cluster Competition 2024

MLPerf Reference Implementation in Python

Tip

  • MLCommons reference implementations are meant to provide a rules-compliant reference for submitters and are in most cases not the best performing. If you want to benchmark a particular system, it is advisable to use the vendor's MLPerf implementation for that system (Nvidia, Intel, etc.).

SDXL

Edge category

In the edge category, sdxl has the Offline scenario, and all scenarios are mandatory for a closed-division submission.

Pytorch framework

CPU device

Minimum system requirements for running the benchmark:

  • Disk Space: 50GB
Docker Environment

Please refer to the installation page to install CM for running the automated benchmark commands.
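
For reference, a typical CM setup might look roughly like the following minimal sketch; the installation page is authoritative and may include additional steps:

# Install the CM automation tool together with the MLPerf automation scripts
pip install cm4mlops
# Alternatively: pip install cmind && cm pull repo mlcommons@cm4mlops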

# Docker Container Build and Performance Estimation for Offline Scenario

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev,_short \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cpu  \
   --docker --quiet \
   --test_query_count=50  
The above command should get you to an interactive shell inside the docker container and do a quick test run for the Offline scenario. Once inside the docker container, run the commands below to perform the accuracy and performance runs for each scenario.

Additional options for the docker launch (a combined example follows the list):

  • --docker_cm_repo=<Custom CM GitHub repo URL in username@repo format>: to use a custom fork of the cm4mlops repository inside the docker image

  • --docker_cache=no: to not use docker cache during the image build

  • --docker_os=ubuntu: ubuntu and rhel are supported.

  • --docker_os_version=20.04: [20.04, 22.04] are supported for Ubuntu and [8, 9] for RHEL.
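
For example, a docker launch combining several of these options (a sketch that reuses only the flags listed above) could look like:

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev,_short \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cpu \
   --docker --quiet \
   --test_query_count=50 \
   --docker_cache=no \
   --docker_os=ubuntu \
   --docker_os_version=22.04
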
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=cpu \
   --quiet 
Native Environment

Please refer to the installation page to install CM for running the automated benchmark commands.

# Setup a virtual environment for Python
cm run script --tags=install,python-venv --name=mlperf
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
# Performance Estimation for Offline Scenario

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev,_short \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cpu  \
   --quiet \
   --test_query_count=50  
The above command should do a test run of the Offline scenario and record the estimated offline_target_qps.

Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=cpu \
   --quiet 
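
If the recorded estimate needs to be overridden for the valid run, an explicit target QPS can be passed. The --offline_target_qps flag is an assumption based on the offline_target_qps value recorded by the test run; replace the placeholder with your measured estimate:

cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=cpu \
   --offline_target_qps=<estimated QPS> \
   --quiet
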
CUDA device

Minimum system requirements for running the benchmark:

  • Device Memory: 24GB (fp32), 16GB (fp16)

  • Disk Space: 50GB

Docker Environment

Please refer to the installation page to install CM for running the automated benchmark commands.

# Docker Container Build and Performance Estimation for Offline Scenario

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev,_short \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cuda  \
   --docker --quiet \
   --test_query_count=50  
The above command should get you to an interactive shell inside the docker container and do a quick test run for the Offline scenario. Once inside the docker container, run the commands below to perform the accuracy and performance runs for each scenario.

Additional options for the docker launch:

  • --docker_cm_repo=<Custom CM GitHub repo URL in username@repo format>: to use a custom fork of the cm4mlops repository inside the docker image

  • --docker_cache=no: to not use docker cache during the image build

Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=cuda \
   --quiet 
Native Environment

Please refer to the installation page to install CM for running the automated benchmark commands.

# Setup a virtual environment for Python
cm run script --tags=install,python-venv --name=mlperf
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
# Performance Estimation for Offline Scenario

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev,_short \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cuda  \
   --quiet \
   --test_query_count=50  
The above command should do a test run of the Offline scenario and record the estimated offline_target_qps.

Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=cuda \
   --quiet 
ROCm device

Minimum system requirements for running the benchmark:

  • Disk Space: 50GB
Native Environment

Please refer to the installation page to install CM for running the automated benchmark commands.

# Setup a virtual environment for Python
cm run script --tags=install,python-venv --name=mlperf
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
# Performance Estimation for Offline Scenario

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev,_short \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=rocm  \
   --quiet \
   --test_query_count=50  
The above command should do a test run of the Offline scenario and record the estimated offline_target_qps.

Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=sdxl \
   --implementation=reference \
   --framework=pytorch \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=rocm \
   --quiet 

Nvidia MLPerf Implementation

SDXL

Edge category

In the edge category, sdxl has the Offline scenario, and all scenarios are mandatory for a closed-division submission.

TensorRT framework

CUDA device

Minimum system requirements for running the benchmark:

  • Device Memory: 16GB

  • Disk Space: 50GB

Docker Environment

Please refer to the installation page to install CM for running the automated benchmark commands.

# Docker Container Build and Performance Estimation for Offline Scenario

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev,_short \
   --model=sdxl \
   --implementation=nvidia \
   --framework=tensorrt \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cuda  \
   --docker --quiet \
   --test_query_count=50  
The above command should get you to an interactive shell inside the docker container and do a quick test run for the Offline scenario. Once inside the docker container, run the commands below to perform the accuracy and performance runs for each scenario.

Additional options for the docker launch (an example follows the list):

  • --docker_cm_repo=<Custom CM GitHub repo URL in username@repo format>: to use a custom fork of the cm4mlops repository inside the docker image

  • --docker_cache=no: to not use docker cache during the image build

  • --gpu_name=<Name of the GPU>: the GPUs with supported configs in CM are orin, rtx_4090, rtx_a6000, rtx_6000_ada, l4, t4, and a100. For other GPUs, a default configuration based on the GPU memory will be used.
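
For example, a docker launch that pins one of the supported GPU configurations (a sketch reusing only the flags listed above; substitute your own GPU name) could look like:

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev,_short \
   --model=sdxl \
   --implementation=nvidia \
   --framework=tensorrt \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cuda \
   --docker --quiet \
   --test_query_count=50 \
   --gpu_name=rtx_4090
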
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=sdxl \
   --implementation=nvidia \
   --framework=tensorrt \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=cuda \
   --quiet