benchmark-any-mlperf-inference-implementation

Automatically generated README for this automation recipe: benchmark-any-mlperf-inference-implementation

Category: MLPerf benchmark support

License: Apache 2.0

  • CM meta description for this script: _cm.yaml
  • Output cached? False

Reuse this script in your project

Install MLCommons CM automation meta-framework

Pull CM repository with this automation recipe (CM script)

cm pull repo mlcommons@cm4mlops

cmr "benchmark run natively all inference any mlperf mlperf-implementation implementation mlperf-models" --help

Run this script

Run this script via CLI
cm run script --tags=benchmark,run,natively,all,inference,any,mlperf,mlperf-implementation,implementation,mlperf-models[,variations] [--input_flags]
Run this script via CLI (alternative)
cmr "benchmark run natively all inference any mlperf mlperf-implementation implementation mlperf-models [variations]" [--input_flags]
Run this script from Python
import cmind

r = cmind.access({'action':'run',
              'automation':'script',
              'tags':'benchmark,run,natively,all,inference,any,mlperf,mlperf-implementation,implementation,mlperf-models',
              'out':'con',
              ...
              (other input keys for this script)
              ...
             })

if r['return']>0:
    print (r['error'])
Run this script via Docker (beta)
cm docker script "benchmark run natively all inference any mlperf mlperf-implementation implementation mlperf-models[variations]" [--input_flags]
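The [,variations] and [--input_flags] placeholders above can be illustrated with a small sketch. The helper below is hypothetical (not part of CM) and only shows how a full cm run script command line could be assembled from the base tags; the example values (reference, resnet50) are assumptions:

```python
# Hypothetical sketch (not part of CM): assemble a "cm run script" command
# from the base tags, optional variation suffixes, and input flags.
BASE_TAGS = ("benchmark,run,natively,all,inference,any,mlperf,"
             "mlperf-implementation,implementation,mlperf-models")

def build_cm_command(variations=(), **flags):
    # Variations are appended to the tag list with a leading underscore,
    # e.g. ",_reference"; flags become --key=value arguments.
    tags = BASE_TAGS + "".join(f",_{v}" for v in variations)
    args = " ".join(f"--{k}={v}" for k, v in flags.items())
    return f"cm run script --tags={tags} {args}".rstrip()

# Assumed example values: reference implementation, models limited to resnet50
cmd = build_cm_command(["reference"], models="resnet50")
```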

Variations

  • Group "implementation"

    • _deepsparse
      • ENV variables:
        • DIVISION: open
        • IMPLEMENTATION: deepsparse
    • _intel
      • ENV variables:
        • IMPLEMENTATION: intel
    • _mil
      • ENV variables:
        • IMPLEMENTATION: mil
    • _nvidia
      • ENV variables:
        • IMPLEMENTATION: nvidia-original
    • _qualcomm
      • ENV variables:
        • IMPLEMENTATION: qualcomm
    • _reference
      • ENV variables:
        • IMPLEMENTATION: reference
    • _tflite-cpp
      • ENV variables:
        • IMPLEMENTATION: tflite_cpp
  • Group "power"

    • _performance-only (default)
    • _power
      • ENV variables:
        • POWER: True
  • Group "sut"

    • _aws-dl2q.24xlarge
    • _macbookpro-m1
      • ENV variables:
        • CATEGORY: edge
        • DIVISION: closed
    • _mini
    • _orin
    • _orin.32g
      • ENV variables:
        • CATEGORY: edge
        • DIVISION: closed
    • _phoenix
      • ENV variables:
        • CATEGORY: edge
        • DIVISION: closed
    • _rb6
    • _rpi4
    • _sapphire-rapids.24c
      • ENV variables:
        • CATEGORY: edge
        • DIVISION: closed
Default variations

_performance-only
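Each variation above contributes the ENV variables listed next to it. The sketch below is hypothetical and mirrors only the values stated in this README; the rule that later variations override earlier ones is an assumption, not documented CM behavior:

```python
# Hypothetical sketch: ENV variables contributed by selected variations,
# copied from the lists above (variations without ENV entries are omitted).
VARIATION_ENV = {
    "_deepsparse": {"DIVISION": "open", "IMPLEMENTATION": "deepsparse"},
    "_intel": {"IMPLEMENTATION": "intel"},
    "_mil": {"IMPLEMENTATION": "mil"},
    "_nvidia": {"IMPLEMENTATION": "nvidia-original"},
    "_qualcomm": {"IMPLEMENTATION": "qualcomm"},
    "_reference": {"IMPLEMENTATION": "reference"},
    "_tflite-cpp": {"IMPLEMENTATION": "tflite_cpp"},
    "_power": {"POWER": True},
    "_macbookpro-m1": {"CATEGORY": "edge", "DIVISION": "closed"},
}

def resolve_env(variations):
    # Assumed merge rule: later variations override earlier ones.
    env = {}
    for v in variations:
        env.update(VARIATION_ENV.get(v, {}))
    return env

env = resolve_env(["_reference", "_power"])
```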

Script flags mapped to environment

  • --backends=value → BACKENDS=value
  • --category=value → CATEGORY=value
  • --devices=value → DEVICES=value
  • --division=value → DIVISION=value
  • --extra_args=value → EXTRA_ARGS=value
  • --models=value → MODELS=value
  • --power_server=value → POWER_SERVER=value
  • --power_server_port=value → POWER_SERVER_PORT=value

Default environment

These keys can be updated via --env.KEY=VALUE, via the env dictionary in @input.json, or via the script flags listed above.

  • DIVISION: open
  • CATEGORY: edge
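How the defaults above combine with flag-mapped values can be sketched as follows. This is a hypothetical illustration of the precedence implied by this README (flags override defaults); CM's actual merge logic may differ:

```python
# Hypothetical sketch: start from the README's default environment, then
# apply values mapped from script flags ("Script flags mapped to environment").
DEFAULT_ENV = {"DIVISION": "open", "CATEGORY": "edge"}
FLAG_TO_ENV = {
    "backends": "BACKENDS", "category": "CATEGORY", "devices": "DEVICES",
    "division": "DIVISION", "extra_args": "EXTRA_ARGS", "models": "MODELS",
    "power_server": "POWER_SERVER", "power_server_port": "POWER_SERVER_PORT",
}

def effective_env(flags):
    # Assumed precedence: script flags override the default environment.
    env = dict(DEFAULT_ENV)
    env.update({FLAG_TO_ENV[k]: v for k, v in flags.items() if k in FLAG_TO_ENV})
    return env

# Assumed example values: closed division, a single model
env = effective_env({"division": "closed", "models": "bert-99"})
```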

Native script being run

No run file exists for Windows


Script output

cmr "benchmark run natively all inference any mlperf mlperf-implementation implementation mlperf-models [variations]" [--input_flags] -j