# build-mlperf-inference-server-nvidia

Automatically generated README for this automation recipe: **build-mlperf-inference-server-nvidia**

Category: **MLPerf benchmark support**

License: **Apache 2.0**

- Notes from the authors, contributors and users: *README-extra*
- CM meta description for this script: `_cm.yaml`
- Output cached? *True*
## Reuse this script in your project

### Install MLCommons CM automation meta-framework

### Pull CM repository with this automation recipe (CM script)

```bash
cm pull repo mlcommons@cm4mlops
```

### Print CM help from the command line

```bash
cmr "build mlcommons mlperf inference inference-server server nvidia-harness nvidia" --help
```
## Run this script

### Run this script via CLI

```bash
cm run script --tags=build,mlcommons,mlperf,inference,inference-server,server,nvidia-harness,nvidia[,variations] [--input_flags]
```

### Run this script via CLI (alternative)

```bash
cmr "build mlcommons mlperf inference inference-server server nvidia-harness nvidia [variations]" [--input_flags]
```
### Run this script from Python

```python
import cmind

r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'build,mlcommons,mlperf,inference,inference-server,server,nvidia-harness,nvidia',
                  'out': 'con',
                  ...
                  # (other input keys for this script)
                  ...
                 })

if r['return'] > 0:
    print(r['error'])
```
### Run this script via Docker (beta)

```bash
cm docker script "build mlcommons mlperf inference inference-server server nvidia-harness nvidia[variations]" [--input_flags]
```
## Variations

- Group "**code**"
  - `_ctuning` (default)
  - `_custom`
  - `_go`
  - `_mlcommons`
  - `_nvidia-only`
- Group "**device**"
  - `_cpu`
    - ENV variables:
      - `CM_MLPERF_DEVICE`: `cpu`
  - `_cuda` (default)
    - ENV variables:
      - `CM_MLPERF_DEVICE`: `cuda`
      - `CM_MLPERF_DEVICE_LIB_NAMESPEC`: `cudart`
  - `_inferentia`
    - ENV variables:
      - `CM_MLPERF_DEVICE`: `inferentia`
- Group "**version**"
  - `_r4.0`

### Default variations

`_ctuning,_cuda`
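Each device-group variation above pins a set of ENV variables. As a minimal sketch of that mapping (a plain dictionary lookup for illustration only — CM's own variation resolution is more involved and lives in the meta-framework, not here):

```python
# Device variations and the ENV values they set, taken from the table above.
DEVICE_VARIATIONS = {
    "_cpu": {"CM_MLPERF_DEVICE": "cpu"},
    "_cuda": {"CM_MLPERF_DEVICE": "cuda",
              "CM_MLPERF_DEVICE_LIB_NAMESPEC": "cudart"},
    "_inferentia": {"CM_MLPERF_DEVICE": "inferentia"},
}

def resolve_device_env(variation="_cuda"):
    """Return the ENV dictionary for a chosen device variation.

    `_cuda` is the documented default; `resolve_device_env` itself is a
    hypothetical helper, not part of the CM API.
    """
    return dict(DEVICE_VARIATIONS[variation])

print(resolve_device_env())        # the default variation, _cuda
print(resolve_device_env("_cpu"))
```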
## Script flags mapped to environment

- `--clean=value` → `CM_MAKE_CLEAN=value`
- `--custom_system=value` → `CM_CUSTOM_SYSTEM_NVIDIA=value`
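The flag-to-environment mapping above can be pictured as a simple dictionary translation. The flag and ENV names below restate the table; the `flags_to_env` helper itself is illustrative and not part of CM, which reads this mapping from the script's `_cm.yaml`:

```python
# Flag names (without the leading --) mapped to the ENV keys they set,
# as listed in the table above.
FLAG_TO_ENV = {
    "clean": "CM_MAKE_CLEAN",
    "custom_system": "CM_CUSTOM_SYSTEM_NVIDIA",
}

def flags_to_env(input_flags):
    """Translate --flag=value pairs into CM_* environment assignments."""
    return {FLAG_TO_ENV[flag]: value
            for flag, value in input_flags.items()
            if flag in FLAG_TO_ENV}

print(flags_to_env({"clean": "yes"}))  # {'CM_MAKE_CLEAN': 'yes'}
```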
## Default environment

These keys can be updated via `--env.KEY=VALUE`, the `env` dictionary in `@input.json`, or using script flags.

- `CM_MAKE_BUILD_COMMAND`: `build`
- `CM_MAKE_CLEAN`: `no`
- `CM_CUSTOM_SYSTEM_NVIDIA`: `yes`
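As a minimal sketch of the update behavior described above — user-supplied keys overriding the defaults — assuming nothing about CM's internal merge logic beyond that description:

```python
# Default environment keys from the list above.
DEFAULT_ENV = {
    "CM_MAKE_BUILD_COMMAND": "build",
    "CM_MAKE_CLEAN": "no",
    "CM_CUSTOM_SYSTEM_NVIDIA": "yes",
}

def build_env(user_env=None):
    """Merge user overrides (e.g. from --env.KEY=VALUE) over the defaults.

    `build_env` is a hypothetical helper for illustration, not a CM function.
    """
    env = dict(DEFAULT_ENV)
    env.update(user_env or {})
    return env

print(build_env({"CM_MAKE_CLEAN": "yes"}))
```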
## Versions

Default version: `r3.1`

- `r2.1`
- `r3.0`
- `r3.1`
- `r4.0`
## Native script being run

No run file exists for Windows.

## Script output

```bash
cmr "build mlcommons mlperf inference inference-server server nvidia-harness nvidia [variations]" [--input_flags] -j
```