Question Answering using BERT Large for IndySCC 2024
Introduction
This guide is designed for IndySCC 2024 to walk participants through running and optimizing the MLPerf Inference Benchmark using BERT Large across various software and hardware configurations. The goal is to maximize system throughput (measured in samples per second) without compromising accuracy.
For a valid MLPerf inference submission, two types of runs are required: a performance run and an accuracy run. In this competition, we focus on the Offline
scenario, where throughput is the key metric and higher values are better. The official MLPerf inference benchmark for BERT Large requires processing a minimum of 10833 samples in both performance and accuracy modes using the SQuAD v1.1 dataset.
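To make the two run types concrete, the sketch below shows, under stated assumptions rather than as part of the competition workflow, how the reference implementation's LoadGen harness drives an Offline run in Python: LoadGen issues the sample pool to the system under test, the callback answers the queries, and switching settings.mode between PerformanceOnly and AccuracyOnly selects the performance or accuracy run. The run_model helper is a hypothetical stand-in for the BERT Large forward pass, and exact loadgen binding signatures can vary slightly between releases.
import mlperf_loadgen as lg

TOTAL_SAMPLES = 10833  # minimum SQuAD v1.1 sample count required by the benchmark

def run_model(sample_index):
    # Hypothetical stand-in for the BERT Large forward pass on one SQuAD sample.
    return b""

def issue_queries(query_samples):
    # Offline scenario: LoadGen hands over the query pool in large batches.
    responses = []
    for qs in query_samples:
        run_model(qs.index)
        # Empty payload here; the real harness returns the predicted answer span.
        responses.append(lg.QuerySampleResponse(qs.id, 0, 0))
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass

def load_samples(indices):
    pass  # the real QSL loads tokenized SQuAD features into memory here

def unload_samples(indices):
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.PerformanceOnly  # or lg.TestMode.AccuracyOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(TOTAL_SAMPLES, TOTAL_SAMPLES, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)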
Scoring
In the IndySCC 2024, your objective is to run the reference (unoptimized) Python implementation of the MLPerf inference benchmark and complete a successful submission that passes the submission checker. Only one of the available frameworks needs to be submitted.
Info
Both MLPerf and CM automation are evolving projects. If you encounter issues or have questions, please submit them here
Artifacts to submit to the SCC committee
All the needed files are automatically pushed to the GitHub repository if you manage to complete the given commands. No additional files need to be submitted.
MLPerf Reference Implementation in Python
Tip
- MLCommons reference implementations are only meant to provide a rules-compliant starting point for submitters and are in most cases not the best performing. If you want to benchmark a particular system, it is advisable to use the vendor MLPerf implementation for that system, such as Nvidia's or Intel's.
BERT-99
Edge category
In the edge category, bert-99 has the Offline scenario, and all scenarios are mandatory for a closed division submission.
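The "-99" suffix denotes the accuracy target: the accuracy run must reach at least 99% of the FP32 reference quality on SQuAD v1.1. A minimal illustration is below; the reference F1 constant is quoted from the MLPerf BERT task definition, and the submission checker enforces the official value.
# bert-99 accuracy target: 99% of the FP32 reference F1 score on SQuAD v1.1.
REFERENCE_F1 = 90.874   # FP32 reference F1 for MLPerf BERT Large
TARGET_RATIO = 0.99     # the "99" in bert-99
print(f"Minimum F1 for a valid bert-99 run: {REFERENCE_F1 * TARGET_RATIO:.3f}")  # ~89.965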
PyTorch framework
CPU device
Please click here to see the minimum system requirements for running the benchmark
- Disk Space: 50GB
Docker Environment
Please refer to the installation page to install CM for running the automated benchmark commands.
# Docker Container Build and Performance Estimation for Offline Scenario
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=test \
--device=cpu \
--docker --quiet \
--test_query_count=100
Please click here to see more options for the docker launch
- --docker_cm_repo=<Custom CM GitHub repo URL in username@repo format>: to use a custom fork of the cm4mlops repository inside the docker image
- --docker_cm_repo_branch=<Custom CM GitHub repo Branch>: to check out a custom branch of the cloned cm4mlops repository inside the docker image
- --docker_cache=no: to not use the docker cache during the image build
- --docker_os=ubuntu: ubuntu and rhel are supported
- --docker_os_version=20.04: [20.04, 22.04] are supported for Ubuntu and [8, 9] for RHEL
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=valid \
--device=cpu \
--quiet
Native Environment
Please refer to the installation page to install CM for running the automated benchmark commands.
# Setup a virtual environment for Python
cm run script --tags=install,python-venv --name=mlperf
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
# Performance Estimation for Offline Scenario
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=test \
--device=cpu \
--quiet \
--test_query_count=100
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=valid \
--device=cpu \
--quiet
CUDA device
Please click here to see the minimum system requirements for running the benchmark
- Device Memory: 8GB
- Disk Space: 50GB
Docker Environment
Please refer to the installation page to install CM for running the automated benchmark commands.
# Docker Container Build and Performance Estimation for Offline Scenario
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=test \
--device=cuda \
--docker --quiet \
--test_query_count=500
Please click here to see more options for the docker launch
- --docker_cm_repo=<Custom CM GitHub repo URL in username@repo format>: to use a custom fork of the cm4mlops repository inside the docker image
- --docker_cm_repo_branch=<Custom CM GitHub repo Branch>: to check out a custom branch of the cloned cm4mlops repository inside the docker image
- --docker_cache=no: to not use the docker cache during the image build
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=valid \
--device=cuda \
--quiet
Native Environment
Please refer to the installation page to install CM for running the automated benchmark commands.
Tip
- It is advisable to use the commands in the Docker tab for CUDA. Run the native commands below only if you are already on a CUDA setup with cuDNN and TensorRT installed; a quick pre-flight check is sketched right after this tip.
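Before launching the native run, a short check along the following lines can save a failed benchmark attempt. This is an illustrative sketch, not part of the CM workflow; it assumes PyTorch and the TensorRT Python bindings are installed in the active environment.
import torch

# Confirm PyTorch can see the GPU before attempting the native CUDA run.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

try:
    import tensorrt
    print("TensorRT:", tensorrt.__version__)
except ImportError:
    print("TensorRT Python bindings not found; prefer the Docker commands above.")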
# Setup a virtual environment for Python
cm run script --tags=install,python-venv --name=mlperf
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
# Performance Estimation for Offline Scenario
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=test \
--device=cuda \
--quiet \
--test_query_count=500
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=valid \
--device=cuda \
--quiet
ROCm device
Please click here to see the minimum system requirements for running the benchmark
- Disk Space: 50GB
Native Environment
Please refer to the installation page to install CM for running the automated benchmark commands.
# Setup a virtual environment for Python
cm run script --tags=install,python-venv --name=mlperf
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
# Performance Estimation for Offline Scenario
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=test \
--device=rocm \
--quiet \
--test_query_count=100
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=pytorch \
--category=edge \
--scenario=Offline \
--execution_mode=valid \
--device=rocm \
--quiet
DeepSparse framework
CPU device
Please click here to see the minimum system requirements for running the benchmark
- Disk Space: 50GB
Docker Environment
Please refer to the installation page to install CM for running the automated benchmark commands.
# Docker Container Build and Performance Estimation for Offline Scenario
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=deepsparse \
--category=edge \
--scenario=Offline \
--execution_mode=test \
--device=cpu \
--docker --quiet \
--test_query_count=100 \
--env.CM_MLPERF_NEURALMAGIC_MODEL_ZOO_STUB=zoo:nlp/question_answering/mobilebert-none/pytorch/huggingface/squad/base_quant-none
Please click here to see more options for the docker launch
- --docker_cm_repo=<Custom CM GitHub repo URL in username@repo format>: to use a custom fork of the cm4mlops repository inside the docker image
- --docker_cm_repo_branch=<Custom CM GitHub repo Branch>: to check out a custom branch of the cloned cm4mlops repository inside the docker image
- --docker_cache=no: to not use the docker cache during the image build
- --docker_os=ubuntu: ubuntu and rhel are supported
- --docker_os_version=20.04: [20.04, 22.04] are supported for Ubuntu and [8, 9] for RHEL
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=deepsparse \
--category=edge \
--scenario=Offline \
--execution_mode=valid \
--device=cpu \
--quiet \
--env.CM_MLPERF_NEURALMAGIC_MODEL_ZOO_STUB=zoo:nlp/question_answering/mobilebert-none/pytorch/huggingface/squad/base_quant-none
Native Environment
Please refer to the installation page to install CM for running the automated benchmark commands.
# Setup a virtual environment for Python
cm run script --tags=install,python-venv --name=mlperf
export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
# Performance Estimation for Offline Scenario
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=deepsparse \
--category=edge \
--scenario=Offline \
--execution_mode=test \
--device=cpu \
--quiet \
--test_query_count=100 \
--env.CM_MLPERF_NEURALMAGIC_MODEL_ZOO_STUB=zoo:nlp/question_answering/mobilebert-none/pytorch/huggingface/squad/base_quant-none
Offline
cm run script --tags=run-mlperf,inference,_r4.1-dev \
--model=bert-99 \
--implementation=reference \
--framework=deepsparse \
--category=edge \
--scenario=Offline \
--execution_mode=valid \
--device=cpu \
--quiet \
--env.CM_MLPERF_NEURALMAGIC_MODEL_ZOO_STUB=zoo:nlp/question_answering/mobilebert-none/pytorch/huggingface/squad/base_quant-none
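The CM_MLPERF_NEURALMAGIC_MODEL_ZOO_STUB value used above points DeepSparse at a quantized MobileBERT question-answering model from the Neural Magic SparseZoo instead of the dense BERT Large checkpoint. Outside the benchmark harness, the same stub can be exercised directly with the DeepSparse pipeline API; the snippet below is an illustrative sketch, and output field names may differ between DeepSparse releases.
from deepsparse import Pipeline

stub = ("zoo:nlp/question_answering/mobilebert-none/"
        "pytorch/huggingface/squad/base_quant-none")

# Downloads the quantized MobileBERT from the SparseZoo and runs it on CPU.
qa = Pipeline.create(task="question-answering", model_path=stub)

result = qa(
    question="Which scenario does the competition use?",
    context="The IndySCC 2024 benchmark focuses on the Offline scenario.",
)
print(result.answer)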
Submission Commands
Generate actual submission tree
cm run script --tags=generate,inference,submission \
--clean \
--preprocess_submission=yes \
--run-checker \
--tar=yes \
--env.CM_TAR_OUTFILE=submission.tar.gz \
--division=open \
--category=edge \
--env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
--run_style=test \
--quiet \
--submitter=<Team Name>
- Use --hw_name="My system name" to give a meaningful system name.
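After the command finishes, you can optionally peek inside the generated archive to confirm the submission tree looks sane before pushing. This is an illustrative check, not part of the CM workflow, and assumes you run it from the directory containing submission.tar.gz.
import tarfile

# List the first entries of the submission archive to verify the
# expected division/submitter/results directory layout is present.
with tarfile.open("submission.tar.gz", "r:gz") as tar:
    for name in tar.getnames()[:20]:
        print(name)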
Push Results to GitHub
Fork the repository at https://github.com/mlcommons/cm4mlperf-inference (results are collected on the mlperf-inference-results-scc24 branch).
Run the following command after replacing --repo_url with your GitHub fork URL.
cm run script --tags=push,github,mlperf,inference,submission \
--repo_url=https://github.com/<myfork>/cm4mlperf-inference \
--repo_branch=mlperf-inference-results-scc24 \
--commit_message="Results on system <HW Name>" \
--quiet
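If the automated push script cannot reach your fork, a manual fallback along the following lines also works. The fork URL, branch name, and commit message mirror the command above; the local clone path is an assumption, and you still need to copy the extracted submission tree into the clone yourself before committing.
import subprocess

fork = "https://github.com/<myfork>/cm4mlperf-inference"   # replace <myfork>
branch = "mlperf-inference-results-scc24"

# Clone the results branch of your fork, copy the extracted submission tree
# into the clone manually, then stage, commit, and push it.
subprocess.run(["git", "clone", "-b", branch, fork, "cm4mlperf-inference"], check=True)
subprocess.run(["git", "-C", "cm4mlperf-inference", "add", "."], check=True)
subprocess.run(["git", "-C", "cm4mlperf-inference", "commit",
                "-m", "Results on system <HW Name>"], check=True)
subprocess.run(["git", "-C", "cm4mlperf-inference", "push"], check=True)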
Once uploaded, open a Pull Request to the origin repository. A GitHub Action will run there, and once it finishes you can see your submitted results at https://docs.mlcommons.org/cm4mlperf-inference.