# import-mlperf-inference-to-experiment

Automatically generated README for this automation recipe: **import-mlperf-inference-to-experiment**

- Category: MLPerf benchmark support
- License: Apache 2.0
- Developers: Grigori Fursin
- Notes from the authors, contributors and users: README-extra
- CM meta description for this script: _cm.yaml
- Output cached? False
## Reuse this script in your project

1. Install the MLCommons CM automation meta-framework.

2. Pull the CM repository with this automation recipe (CM script):

   ```bash
   cm pull repo mlcommons@cm4mlops
   ```

3. Print CM help from the command line:

   ```bash
   cmr "import mlperf inference mlperf-inference experiment 2experiment to-experiment" --help
   ```
## Run this script

### Run this script via CLI

```bash
cm run script --tags=import,mlperf,inference,mlperf-inference,experiment,2experiment,to-experiment[,variations] [--input_flags]
```

### Run this script via CLI (alternative)

```bash
cmr "import mlperf inference mlperf-inference experiment 2experiment to-experiment [variations]" [--input_flags]
```
### Run this script from Python

```python
import cmind

r = cmind.access({'action':'run',
                  'automation':'script',
                  'tags':'import,mlperf,inference,mlperf-inference,experiment,2experiment,to-experiment',
                  'out':'con',
                  ...
                  (other input keys for this script)
                  ...
                 })
if r['return']>0:
    print(r['error'])
```
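In the CM convention, the script's input flags (listed under "Script flags mapped to environment" below) are passed as additional keys of the same input dictionary. A minimal sketch of assembling such an input programmatically; the `MyOrg` submitter value is a hypothetical placeholder, not a default:

```python
# Sketch: build the input dictionary for cmind.access() from a tag list
# and optional input flags. The 'submitter' value is a hypothetical example.
def build_cm_input(tags, **input_flags):
    """Assemble the cmind.access() input dict from tags and flag values."""
    inp = {'action': 'run',
           'automation': 'script',
           'tags': ','.join(tags),
           'out': 'con'}
    inp.update(input_flags)  # e.g. submitter='MyOrg'
    return inp

inp = build_cm_input(['import', 'mlperf', 'inference', 'mlperf-inference',
                      'experiment', '2experiment', 'to-experiment'],
                     submitter='MyOrg')
```

The resulting dictionary can then be passed directly to `cmind.access(inp)`.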
### Run this script via Docker (beta)

```bash
cm docker script "import mlperf inference mlperf-inference experiment 2experiment to-experiment[variations]" [--input_flags]
```
## Variations

- No group (any combination of variations can be selected)

  - `_skip_checker`
    - ENV variables:
      - `CM_SKIP_SUBMISSION_CHECKER`: `True`
## Script flags mapped to environment

- `--submitter=value` → `CM_MLPERF_SUBMITTER=value`
- `--target_repo=value` → `CM_IMPORT_MLPERF_INFERENCE_TARGET_REPO=value`
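The table above can be restated as a small sketch; the dictionary below simply mirrors the two documented mappings and is not the script's actual implementation:

```python
# Illustrative mapping of the CLI input flags above to the environment
# variables they set (restated from this README, not taken from the script).
FLAG_TO_ENV = {
    'submitter': 'CM_MLPERF_SUBMITTER',
    'target_repo': 'CM_IMPORT_MLPERF_INFERENCE_TARGET_REPO',
}

def flags_to_env(flags):
    """Convert e.g. {'submitter': 'MyOrg'} into {'CM_MLPERF_SUBMITTER': 'MyOrg'}."""
    return {FLAG_TO_ENV[k]: v for k, v in flags.items() if k in FLAG_TO_ENV}
```

For example, `flags_to_env({'submitter': 'MyOrg'})` yields `{'CM_MLPERF_SUBMITTER': 'MyOrg'}`.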
## Script output

```bash
cmr "import mlperf inference mlperf-inference experiment 2experiment to-experiment [variations]" [--input_flags] -j
```