# generate-nvidia-engine

Automatically generated README for this automation recipe: **generate-nvidia-engine**

- Category: MLPerf benchmark support
- License: Apache 2.0
- This CM script is in draft stage
- CM meta description for this script: `_cm.yaml`
- Output cached? False
## Reuse this script in your project

### Install MLCommons CM automation meta-framework

### Pull CM repository with this automation recipe (CM script)

```bash
cm pull repo mlcommons@cm4mlops
```

### Print CM help from the command line

```bash
cmr "generate engine mlperf inference nvidia" --help
```
## Run this script

### Run this script via CLI

```bash
cm run script --tags=generate,engine,mlperf,inference,nvidia[,variations] [--input_flags]
```

### Run this script via CLI (alternative)

```bash
cmr "generate engine mlperf inference nvidia [variations]" [--input_flags]
```
### Run this script from Python

```python
import cmind

r = cmind.access({'action':'run',
                  'automation':'script',
                  'tags':'generate,engine,mlperf,inference,nvidia',
                  'out':'con',
                  ...
                  (other input keys for this script)
                  ...
                 })
if r['return']>0:
    print (r['error'])
```
### Run this script via Docker (beta)

```bash
cm docker script "generate engine mlperf inference nvidia[variations]" [--input_flags]
```
## Variations

### No group (any combination of variations can be selected)

- `_batch_size.#`
  - ENV variables:
    - `CM_MODEL_BATCH_SIZE`: `None`
- `_copy_streams.#`
  - ENV variables:
    - `CM_GPU_COPY_STREAMS`: `None`
- `_cuda`
  - ENV variables:
    - `CM_MLPERF_DEVICE`: `gpu`
    - `CM_MLPERF_DEVICE_LIB_NAMESPEC`: `cudart`
### Group "device"

- `_cpu` (default)
  - ENV variables:
    - `CM_MLPERF_DEVICE`: `cpu`
### Group "model"

- `_resnet50` (default)
  - ENV variables:
    - `CM_MODEL`: `resnet50`
- `_retinanet`
  - ENV variables:
    - `CM_MODEL`: `retinanet`

### Default variations

`_cpu,_resnet50`
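As a sketch, variations from different groups can be combined in a single run. The variation names below come from the tables above; the batch-size value `32` is purely illustrative:

```bash
# Illustrative only: select the _cuda device variation, the _resnet50
# model variation, and a wildcard batch size of 32 in one invocation.
cmr "generate engine mlperf inference nvidia _cuda,_resnet50,_batch_size.32"
```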
## Script flags mapped to environment

- `--output_dir=value` → `CM_MLPERF_OUTPUT_DIR=value`
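For example, the flag above can be passed directly on the command line (the output path here is purely illustrative):

```bash
# Illustrative only: the documented --output_dir flag sets
# CM_MLPERF_OUTPUT_DIR for the script.
cmr "generate engine mlperf inference nvidia" --output_dir=/tmp/nvidia_engines
```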
## Default environment

These keys can be updated via `--env.KEY=VALUE`, the `env` dictionary in `@input.json`, or using script flags.

- `CM_BATCH_COUNT`: `1`
- `CM_BATCH_SIZE`: `1`
- `CM_LOADGEN_SCENARIO`: `Offline`
- `CM_GPU_COPY_STREAMS`: `1`
- `CM_TENSORRT_WORKSPACE_SIZE`: `4194304`
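The override behaviour can be sketched in plain Python: defaults are taken from the table above, and user-supplied `--env.KEY=VALUE` pairs take precedence. This is a minimal illustration of the merge semantics, not the actual CM implementation:

```python
# Documented defaults for this script (values from the table above).
default_env = {
    'CM_BATCH_COUNT': '1',
    'CM_BATCH_SIZE': '1',
    'CM_LOADGEN_SCENARIO': 'Offline',
    'CM_GPU_COPY_STREAMS': '1',
    'CM_TENSORRT_WORKSPACE_SIZE': '4194304',
}

def merge_env(defaults, overrides):
    """Return a copy of defaults updated with user overrides
    (overrides win, untouched defaults are preserved)."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

# Simulate passing --env.CM_GPU_COPY_STREAMS=4 on the command line.
env = merge_env(default_env, {'CM_GPU_COPY_STREAMS': '4'})
print(env['CM_GPU_COPY_STREAMS'])   # -> 4 (overridden)
print(env['CM_LOADGEN_SCENARIO'])   # -> Offline (default preserved)
```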
## Native script being run

No run file exists for Windows.

## Script output

```bash
cmr "generate engine mlperf inference nvidia [variations]" [--input_flags] -j
```