app-mlperf-inference-mlcommons-cpp

Automatically generated README for this automation recipe: app-mlperf-inference-mlcommons-cpp

Category: Modular MLPerf inference benchmark pipeline

License: Apache 2.0

Developers: Thomas Zhu, Arjun Suresh, Grigori Fursin

  • Notes from the authors, contributors and users: README-extra

  • CM meta description for this script: _cm.yaml
  • Output cached? False

Reuse this script in your project

Install MLCommons CM automation meta-framework

Pull CM repository with this automation recipe (CM script)

cm pull repo mlcommons@cm4mlops

cmr "app mlcommons mlperf inference cpp" --help

Run this script

Run this script via CLI
cm run script --tags=app,mlcommons,mlperf,inference,cpp[,variations] [--input_flags]
Run this script via CLI (alternative)
cmr "app mlcommons mlperf inference cpp [variations]" [--input_flags]
Run this script from Python
import cmind

r = cmind.access({'action':'run',
                  'automation':'script',
                  'tags':'app,mlcommons,mlperf,inference,cpp',
                  'out':'con',
                  ...
                  (other input keys for this script)
                  ...
                 })

if r['return']>0:
    print(r['error'])
Run this script via Docker (beta)
cm docker script "app mlcommons mlperf inference cpp [variations]" [--input_flags]

Variations

  • Group "batch-size"

    • _batch-size.#
      • ENV variables:
        • CM_MLPERF_LOADGEN_MAX_BATCHSIZE: #
  • Group "device"

    • _cpu (default)
      • ENV variables:
        • CM_MLPERF_DEVICE: cpu
    • _cuda
      • ENV variables:
        • CM_MLPERF_DEVICE: gpu
        • CM_MLPERF_DEVICE_LIB_NAMESPEC: cudart
  • Group "framework"

    • _onnxruntime (default)
      • ENV variables:
        • CM_MLPERF_BACKEND: onnxruntime
        • CM_MLPERF_BACKEND_LIB_NAMESPEC: onnxruntime
    • _pytorch
      • ENV variables:
        • CM_MLPERF_BACKEND: pytorch
    • _tf
      • ENV variables:
        • CM_MLPERF_BACKEND: tf
    • _tflite
      • ENV variables:
        • CM_MLPERF_BACKEND: tflite
    • _tvm-onnx
      • ENV variables:
        • CM_MLPERF_BACKEND: tvm-onnx
  • Group "loadgen-scenario"

    • _multistream
      • ENV variables:
        • CM_MLPERF_LOADGEN_SCENARIO: MultiStream
    • _offline (default)
      • ENV variables:
        • CM_MLPERF_LOADGEN_SCENARIO: Offline
    • _server
      • ENV variables:
        • CM_MLPERF_LOADGEN_SCENARIO: Server
    • _singlestream
      • ENV variables:
        • CM_MLPERF_LOADGEN_SCENARIO: SingleStream
        • CM_MLPERF_LOADGEN_MAX_BATCHSIZE: 1
  • Group "model"

    • _resnet50 (default)
      • ENV variables:
        • CM_MODEL: resnet50
    • _retinanet
      • ENV variables:
        • CM_MODEL: retinanet
Default variations

_cpu,_offline,_onnxruntime,_resnet50
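
For illustration, variations are selected by appending them (with their leading underscore) to the script tags. The sketch below picks one example combination (_cuda, _retinanet, _multistream); any other variations from the groups above can be substituted.

import cmind

# Variation tags override the defaults listed above
# (device=_cuda, model=_retinanet, scenario=_multistream in this example).
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,mlcommons,mlperf,inference,cpp,_cuda,_retinanet,_multistream',
                  'out': 'con'})

if r['return'] > 0:
    print(r['error'])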

Script flags mapped to environment

  • --count=value → CM_MLPERF_LOADGEN_QUERY_COUNT=value
  • --max_batchsize=value → CM_MLPERF_LOADGEN_MAX_BATCHSIZE=value
  • --mlperf_conf=value → CM_MLPERF_CONF=value
  • --mode=value → CM_MLPERF_LOADGEN_MODE=value
  • --output_dir=value → CM_MLPERF_OUTPUT_DIR=value
  • --performance_sample_count=value → CM_MLPERF_LOADGEN_PERFORMANCE_SAMPLE_COUNT=value
  • --scenario=value → CM_MLPERF_LOADGEN_SCENARIO=value
  • --user_conf=value → CM_MLPERF_USER_CONF=value
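
From Python, these flags are passed as plain keys of the input dictionary and CM exports them to the environment variables listed above. A short sketch with arbitrary example values:

import cmind

# 'count', 'mode' and 'scenario' map to CM_MLPERF_LOADGEN_QUERY_COUNT,
# CM_MLPERF_LOADGEN_MODE and CM_MLPERF_LOADGEN_SCENARIO respectively.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,mlcommons,mlperf,inference,cpp',
                  'count': '100',
                  'mode': 'performance',
                  'scenario': 'Offline',
                  'out': 'con'})

if r['return'] > 0:
    print(r['error'])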

Default environment

These keys can be updated via --env.KEY=VALUE, via the env dictionary in @input.json, or using script flags.

  • CM_BATCH_COUNT: 1
  • CM_BATCH_SIZE: 1
  • CM_FAST_COMPILATION: yes
  • CM_MLPERF_SUT_NAME_IMPLEMENTATION_PREFIX: cpp
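
To override one of these defaults from Python, pass an env dictionary (the equivalent of --env.KEY=VALUE on the command line). A minimal sketch:

import cmind

# Equivalent of: --env.CM_FAST_COMPILATION=no
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,mlcommons,mlperf,inference,cpp',
                  'env': {'CM_FAST_COMPILATION': 'no'},
                  'out': 'con'})

if r['return'] > 0:
    print(r['error'])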

Script output

cmr "app mlcommons mlperf inference cpp [variations]" [--input_flags] -j