# push-mlperf-inference-results-to-github

Automatically generated README for this automation recipe: **push-mlperf-inference-results-to-github**

- Category: **MLPerf benchmark support**
- License: **Apache 2.0**
- CM meta description for this script: `_cm.json`
- Output cached? *False*
## Reuse this script in your project

### Install the MLCommons CM automation meta-framework

### Pull the CM repository with this automation recipe (CM script)

```bash
cm pull repo mlcommons@cm4mlops
```

### Print CM help from the command line

```bash
cmr "push mlperf mlperf-inference-results publish-results inference submission github" --help
```
## Run this script

### Run this script via CLI

```bash
cm run script --tags=push,mlperf,mlperf-inference-results,publish-results,inference,submission,github [--input_flags]
```

### Run this script via CLI (alternative)

```bash
cmr "push mlperf mlperf-inference-results publish-results inference submission github" [--input_flags]
```
### Run this script from Python

```python
import cmind

r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'push,mlperf,mlperf-inference-results,publish-results,inference,submission,github',
                  'out': 'con',
                  ...
                  (other input keys for this script)
                  ...
                 })
if r['return'] > 0:
    print(r['error'])
```
### Run this script via Docker (beta)

```bash
cm docker script "push mlperf mlperf-inference-results publish-results inference submission github" [--input_flags]
```
## Script flags mapped to environment

- `--branch=value` → `CM_GIT_BRANCH=value`
- `--commit_message=value` → `CM_MLPERF_RESULTS_REPO_COMMIT_MESSAGE=value`
- `--repo_branch=value` → `CM_GIT_BRANCH=value`
- `--repo_url=value` → `CM_MLPERF_RESULTS_GIT_REPO_URL=value`
- `--submission_dir=value` → `CM_MLPERF_INFERENCE_SUBMISSION_DIR=value`
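The flag-to-environment translation above can be pictured as a simple lookup. The sketch below is purely illustrative: the `FLAG_TO_ENV` table and `flags_to_env` helper are hypothetical names mirroring the mapping listed here, not part of the CM framework itself.

```python
# Hypothetical helper mirroring the flag -> environment mapping above;
# this is NOT part of the CM API, just an illustration of the table.
FLAG_TO_ENV = {
    'branch': 'CM_GIT_BRANCH',
    'commit_message': 'CM_MLPERF_RESULTS_REPO_COMMIT_MESSAGE',
    'repo_branch': 'CM_GIT_BRANCH',
    'repo_url': 'CM_MLPERF_RESULTS_GIT_REPO_URL',
    'submission_dir': 'CM_MLPERF_INFERENCE_SUBMISSION_DIR',
}

def flags_to_env(flags):
    """Translate {'branch': 'main', ...} into the corresponding env dict."""
    return {FLAG_TO_ENV[k]: v for k, v in flags.items() if k in FLAG_TO_ENV}

print(flags_to_env({'branch': 'main', 'commit_message': 'Add results'}))
```

Note that `--branch` and `--repo_branch` are aliases: both set `CM_GIT_BRANCH`.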
## Default environment

These keys can be updated via `--env.KEY=VALUE`, an `env` dictionary in `@input.json`, or script flags.

- `CM_MLPERF_RESULTS_GIT_REPO_URL`: `https://github.com/ctuning/mlperf_inference_submissions_v4.0`
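The default above acts as a fallback that user-supplied values replace. A minimal sketch of that precedence, assuming later sources simply win over the defaults (the `resolve_env` helper is hypothetical, not CM's actual resolution logic):

```python
# Hypothetical sketch of combining the default environment with user
# overrides (e.g. from --env.KEY=VALUE); not CM's real implementation.
DEFAULT_ENV = {
    'CM_MLPERF_RESULTS_GIT_REPO_URL':
        'https://github.com/ctuning/mlperf_inference_submissions_v4.0',
}

def resolve_env(overrides):
    """Start from the defaults; any user-supplied key replaces its default."""
    return {**DEFAULT_ENV, **overrides}

print(resolve_env({'CM_MLPERF_RESULTS_GIT_REPO_URL': 'https://github.com/my-org/my-results'}))
```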
## Native script being run

No run file exists for Windows.

## Script output

```bash
cmr "push mlperf mlperf-inference-results publish-results inference submission github" [--input_flags] -j
```