# run-mlperf-inference-mobilenet-models

Automatically generated README for this automation recipe: **run-mlperf-inference-mobilenet-models**

Category: **MLPerf benchmark support**

License: **Apache 2.0**
## Set up

We need the full ImageNet dataset to make image-classification submissions for MLPerf inference. Since this dataset is not publicly available via a URL, please follow the instructions given here to download the dataset and register it in CM.
### Docker Setup (optional)

CM commands are expected to run natively, but if you prefer not to modify the host system, you can run the command below to set up a Docker container.

```bash
cm docker script --tags=run,mobilenet-models,_tflite,_accuracy-only \
--adr.compiler.tags=gcc \
--docker_cm_repo=mlcommons@cm4mlops \
--imagenet_path=$HOME/imagenet-2012-val \
--results_dir=$HOME/mobilenet_results \
--submission_dir=$HOME/inference_submission_3.1 \
--docker_skip_run_cmd
```
## Run Commands

Since the runs can take many hours, you can install `screen` as follows in case you are running remotely. You may omit `screen` from the commands below if you are running directly on a host system.

```bash
cmr "get generic-sys-util _screen"
```
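As a sketch of how `screen` helps here (the session name `mlperf_mobilenet` is arbitrary and not part of the recipe), a long run can be launched in a detached, named session and reattached later:

```shell
# Launch the accuracy run in a detached, named screen session
screen -dmS mlperf_mobilenet cmr "run mobilenet-models _tflite _accuracy-only" \
    --adr.compiler.tags=gcc --results_dir=$HOME/mobilenet_results

# List sessions, then reattach to check progress (detach again with Ctrl-A d)
screen -ls
screen -r mlperf_mobilenet
```

With `-dmS`, the run keeps going even if your SSH connection drops.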
### Default tflite

Do a full accuracy run for all the models (can take almost a day):

```bash
screen cmr "run mobilenet-models _tflite _accuracy-only" \
--adr.compiler.tags=gcc \
--results_dir=$HOME/mobilenet_results
```
Do a full performance run for all the models (can take almost a day):

```bash
screen cmr "run mobilenet-models _tflite _performance-only" \
--adr.compiler.tags=gcc \
--results_dir=$HOME/mobilenet_results
```
Generate README files for all the runs:

```bash
cmr "run mobilenet-models _tflite _populate-readme" \
--adr.compiler.tags=gcc \
--results_dir=$HOME/mobilenet_results
```
### Generate actual submission tree

We should use the master branch of the MLCommons inference repo for the submission checker. You can use the `--hw_notes_extra` option to add your name to the notes, and `--hw_name` to give a meaningful system name. Examples can be seen here.

```bash
cmr "generate inference submission" \
--results_dir=$HOME/mobilenet_results/valid_results \
--submission_dir=$HOME/mobilenet_submission_tree \
--clean \
--infer_scenario_results=yes \
--adr.compiler.tags=gcc --adr.inference-src.version=master \
--run-checker \
--submitter=cTuning \
--hw_notes_extra="Result taken by NAME" \
--hw_name="My system name"
```
### Push the results to a GitHub repo

First, create a fork of this repo. Then run the following command after replacing `--repo_url` with your fork URL.

```bash
cmr "push github mlperf inference submission" \
--submission_dir=$HOME/mobilenet_submission_tree \
--repo_url=https://github.com/ctuning/mlperf_inference_submissions_v3.1/ \
--commit_message="Mobilenet results added"
```

Finally, create a PR to the cTuning repo.
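If you use the GitHub CLI (`gh`, a separate tool not part of CM; shown here only as an illustrative sketch), the fork and PR steps could look like:

```shell
# Create (and clone) a fork of the submissions repo under your account
gh repo fork https://github.com/ctuning/mlperf_inference_submissions_v3.1 --clone

# After the CM push command above has pushed your results to the fork,
# open a pull request against the upstream cTuning repo
cd mlperf_inference_submissions_v3.1
gh pr create --title "Mobilenet results added" --body "MobileNet/EfficientNet results generated via CM"
```

You can equally create the fork and the PR through the GitHub web interface.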
### Using ARMNN with NEON

Follow the same procedure as above, but for the first three experiment runs add `_armnn,_neon` to the tags. For example:

```bash
cmr "run mobilenet-models _tflite _armnn _neon _accuracy-only" \
--adr.compiler.tags=gcc \
--results_dir=$HOME/mobilenet_results
```

`results_dir` and `submission_dir` can be the same as before, as the results will go to different subfolders.
### Using ARMNN with OpenCL

Follow the same procedure as above, but for the first three experiment runs add `_armnn,_opencl` to the tags. For example:

```bash
cmr "run mobilenet-models _tflite _armnn _opencl _accuracy-only" \
--adr.compiler.tags=gcc \
--results_dir=$HOME/mobilenet_results
```

`results_dir` and `submission_dir` can be the same as before, as the results will go to different subfolders.
- CM meta description for this script: `_cm.json`
- Output cached? `False`
## Reuse this script in your project

Install the MLCommons CM automation meta-framework.

Pull the CM repository with this automation recipe (CM script):

```bash
cm pull repo mlcommons@cm4mlops
```

Print CM help from the command line:

```bash
cmr "run mobilenet models image-classification mobilenet-models mlperf inference" --help
```
### Run this script

Run this script via CLI:

```bash
cm run script --tags=run,mobilenet,models,image-classification,mobilenet-models,mlperf,inference[,variations] [--input_flags]
```

Run this script via CLI (alternative):

```bash
cmr "run mobilenet models image-classification mobilenet-models mlperf inference [variations]" [--input_flags]
```
Run this script from Python:

```python
import cmind

r = cmind.access({'action':'run',
                  'automation':'script',
                  'tags':'run,mobilenet,models,image-classification,mobilenet-models,mlperf,inference',
                  'out':'con',
                  ...
                  (other input keys for this script)
                  ...
                 })
if r['return'] > 0:
    print(r['error'])
```
Run this script via Docker (beta):

```bash
cm docker script "run mobilenet models image-classification mobilenet-models mlperf inference[variations]" [--input_flags]
```
## Variations

### No group (any combination of variations can be selected)

- `_armnn`
  - ENV variables:
    - `CM_MLPERF_USE_ARMNN_LIBRARY`: `yes`
- `_neon` (alias: `_use-neon`)
  - ENV variables:
    - `CM_MLPERF_USE_NEON`: `yes`
- `_only-fp32`
  - ENV variables:
    - `CM_MLPERF_RUN_INT8`: `no`
- `_only-int8`
  - ENV variables:
    - `CM_MLPERF_RUN_FP32`: `no`
- `_opencl`
  - ENV variables:
    - `CM_MLPERF_USE_OPENCL`: `yes`
### Group "base-framework"

- `_tflite` (default)
### Group "model-selection"

- `_all-models` (default)
  - ENV variables:
    - `CM_MLPERF_RUN_MOBILENETS`: `yes`
    - `CM_MLPERF_RUN_EFFICIENTNETS`: `yes`
- `_efficientnet`
  - ENV variables:
    - `CM_MLPERF_RUN_EFFICIENTNETS`: `yes`
- `_mobilenet`
  - ENV variables:
    - `CM_MLPERF_RUN_MOBILENETS`: `yes`
### Group "optimization"

- `_tflite-default` (default)
  - ENV variables:
    - `CM_MLPERF_TFLITE_DEFAULT_MODE`: `yes`
### Group "run-mode"

- `_accuracy-only`
  - ENV variables:
    - `CM_MLPERF_FIND_PERFORMANCE_MODE`: `no`
    - `CM_MLPERF_ACCURACY_MODE`: `yes`
    - `CM_MLPERF_SUBMISSION_MODE`: `no`
- `_find-performance`
  - ENV variables:
    - `CM_MLPERF_FIND_PERFORMANCE_MODE`: `yes`
    - `CM_MLPERF_SUBMISSION_MODE`: `no`
- `_performance-only`
  - ENV variables:
    - `CM_MLPERF_FIND_PERFORMANCE_MODE`: `no`
    - `CM_MLPERF_PERFORMANCE_MODE`: `yes`
    - `CM_MLPERF_SUBMISSION_MODE`: `no`
- `_populate-readme`
  - ENV variables:
    - `CM_MLPERF_FIND_PERFORMANCE_MODE`: `no`
    - `CM_MLPERF_POPULATE_README`: `yes`
- `_submission`
  - ENV variables:
    - `CM_MLPERF_FIND_PERFORMANCE_MODE`: `no`
    - `CM_MLPERF_SUBMISSION_MODE`: `yes`
### Default variations

`_all-models,_tflite,_tflite-default`
## Script flags mapped to environment

- `--find-performance=value` → `CM_MLPERF_FIND_PERFORMANCE_MODE=value`
- `--imagenet_path=value` → `IMAGENET_PATH=value`
- `--no-rerun=value` → `CM_MLPERF_NO_RERUN=value`
- `--power=value` → `CM_MLPERF_POWER=value`
- `--results_dir=value` → `CM_MLPERF_INFERENCE_RESULTS_DIR=value`
- `--submission=value` → `CM_MLPERF_SUBMISSION_MODE=value`
- `--submission_dir=value` → `CM_MLPERF_INFERENCE_SUBMISSION_DIR=value`
## Default environment

These keys can be updated via `--env.KEY=VALUE`, the `env` dictionary in `@input.json`, or script flags.

- `CM_MLPERF_RUN_MOBILENETS`: `no`
- `CM_MLPERF_RUN_EFFICIENTNETS`: `no`
- `CM_MLPERF_NO_RERUN`: `no`
- `CM_MLPERF_RUN_FP32`: `yes`
- `CM_MLPERF_RUN_INT8`: `yes`
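As an illustrative sketch (assuming a working CM setup; the `_only-fp32` variation is documented above, while the `--env` form uses CM's generic key override), the int8 runs could be skipped in either of two equivalent ways:

```shell
# Documented shorthand: the _only-fp32 variation sets CM_MLPERF_RUN_INT8=no
cmr "run mobilenet-models _tflite _only-fp32 _accuracy-only" \
    --adr.compiler.tags=gcc --results_dir=$HOME/mobilenet_results

# Generic override of the same default environment key
cmr "run mobilenet-models _tflite _accuracy-only" \
    --env.CM_MLPERF_RUN_INT8=no \
    --adr.compiler.tags=gcc --results_dir=$HOME/mobilenet_results
```

Variations are the preferred form, since they also pull in any dependencies the variation declares.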
## Native script being run

No run file exists for Windows.

## Script output

```bash
cmr "run mobilenet models image-classification mobilenet-models mlperf inference [variations]" [--input_flags] -j
```