app-mlperf-inference-dummy

Automatically generated README for this automation recipe: app-mlperf-inference-dummy

Category: Modular MLPerf benchmarks

License: Apache 2.0

  • CM meta description for this script: _cm.yaml
  • Output cached? False

Reuse this script in your project

Install MLCommons CM automation meta-framework

Pull CM repository with this automation recipe (CM script)

cm pull repo mlcommons@cm4mlops

cmr "reproduce mlcommons mlperf inference harness dummy-harness dummy" --help

Run this script

Run this script via CLI
cm run script --tags=reproduce,mlcommons,mlperf,inference,harness,dummy-harness,dummy[,variations] [--input_flags]
Run this script via CLI (alternative)
cmr "reproduce mlcommons mlperf inference harness dummy-harness dummy [variations]" [--input_flags]
Run this script from Python
import cmind

r = cmind.access({'action':'run',
              'automation':'script',
              'tags':'reproduce,mlcommons,mlperf,inference,harness,dummy-harness,dummy',
              'out':'con',
              # ... (other input keys for this script) ...
             })

if r['return']>0:
    print(r['error'])
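
For instance, a complete call that pins the default variations and forwards the query count as an input key (a minimal sketch; the assumption that input flags map to same-named dictionary keys should be verified against the CM documentation):

import cmind

# Variation suffixes are appended to the tags string, exactly as on the CLI.
r = cmind.access({'action':'run',
              'automation':'script',
              'tags':'reproduce,mlcommons,mlperf,inference,harness,dummy-harness,dummy,_resnet50,_pytorch,_cpu,_offline',
              'out':'con',
              'count':'10'})  # assumed mapping of the --count flag

if r['return']>0:
    print(r['error'])
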
Run this script via Docker (beta)
cm docker script "reproduce mlcommons mlperf inference harness dummy-harness dummy[variations]" [--input_flags]
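
As with the CLI forms above, variations go inside the quoted tag string; for example (an illustrative combination):

cm docker script "reproduce mlcommons mlperf inference harness dummy-harness dummy _bert-99,_cuda" --scenario=Offline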

Variations

  • Group "backend"

    • _pytorch (default)
      • ENV variables:
        • CM_MLPERF_BACKEND: pytorch
  • Group "batch-size"

    • _bs.#
  • Group "device"

    • _cpu (default)
      • ENV variables:
        • CM_MLPERF_DEVICE: cpu
    • _cuda
      • ENV variables:
        • CM_MLPERF_DEVICE: gpu
        • CM_MLPERF_DEVICE_LIB_NAMESPEC: cudart
  • Group "loadgen-scenario"

    • _multistream
      • ENV variables:
        • CM_MLPERF_LOADGEN_SCENARIO: MultiStream
    • _offline
      • ENV variables:
        • CM_MLPERF_LOADGEN_SCENARIO: Offline
    • _server
      • ENV variables:
        • CM_MLPERF_LOADGEN_SCENARIO: Server
    • _singlestream
      • ENV variables:
        • CM_MLPERF_LOADGEN_SCENARIO: SingleStream
  • Group "model"

    • _bert-99
      • ENV variables:
        • CM_MODEL: bert-99
        • CM_SQUAD_ACCURACY_DTYPE: float32
    • _bert-99.9
      • ENV variables:
        • CM_MODEL: bert-99.9
    • _gptj-99
      • ENV variables:
        • CM_MODEL: gptj-99
        • CM_SQUAD_ACCURACY_DTYPE: float32
    • _gptj-99.9
      • ENV variables:
        • CM_MODEL: gptj-99.9
    • _llama2-70b-99
      • ENV variables:
        • CM_MODEL: llama2-70b-99
    • _llama2-70b-99.9
      • ENV variables:
        • CM_MODEL: llama2-70b-99.9
    • _resnet50 (default)
      • ENV variables:
        • CM_MODEL: resnet50
    • _retinanet
      • ENV variables:
        • CM_MODEL: retinanet
  • Group "precision"

    • _fp16
    • _fp32
    • _uint8
Default variations

_cpu,_pytorch,_resnet50
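
Each group contributes at most one variation, and any group left unspecified falls back to its default above. For example, to target BERT on a CUDA device in the Server scenario (an illustrative combination):

cmr "reproduce mlcommons mlperf inference harness dummy-harness dummy _bert-99,_cuda,_server" [--input_flags]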

Script flags mapped to environment

  • --count=value  →  CM_MLPERF_LOADGEN_QUERY_COUNT=value
  • --max_batchsize=value  →  CM_MLPERF_LOADGEN_MAX_BATCHSIZE=value
  • --mlperf_conf=value  →  CM_MLPERF_CONF=value
  • --mode=value  →  CM_MLPERF_LOADGEN_MODE=value
  • --multistream_target_latency=value  →  CM_MLPERF_LOADGEN_MULTISTREAM_TARGET_LATENCY=value
  • --offline_target_qps=value  →  CM_MLPERF_LOADGEN_OFFLINE_TARGET_QPS=value
  • --output_dir=value  →  CM_MLPERF_OUTPUT_DIR=value
  • --performance_sample_count=value  →  CM_MLPERF_LOADGEN_PERFORMANCE_SAMPLE_COUNT=value
  • --rerun=value  →  CM_RERUN=value
  • --results_repo=value  →  CM_MLPERF_INFERENCE_RESULTS_REPO=value
  • --scenario=value  →  CM_MLPERF_LOADGEN_SCENARIO=value
  • --server_target_qps=value  →  CM_MLPERF_LOADGEN_SERVER_TARGET_QPS=value
  • --singlestream_target_latency=value  →  CM_MLPERF_LOADGEN_SINGLESTREAM_TARGET_LATENCY=value
  • --skip_preprocess=value  →  CM_SKIP_PREPROCESS_DATASET=value
  • --skip_preprocessing=value  →  CM_SKIP_PREPROCESS_DATASET=value
  • --target_latency=value  →  CM_MLPERF_LOADGEN_TARGET_LATENCY=value
  • --target_qps=value  →  CM_MLPERF_LOADGEN_TARGET_QPS=value
  • --user_conf=value  →  CM_MLPERF_USER_CONF=value
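
These flags compose on a single command line; for example (all values illustrative):

cmr "reproduce mlcommons mlperf inference harness dummy-harness dummy" --scenario=Offline --mode=performance --count=100 --output_dir=/tmp/dummy_results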

Default environment

These keys can be updated via --env.KEY=VALUE, via the env dictionary in @input.json, or via the script flags listed above.

  • CM_MLPERF_LOADGEN_SCENARIO: Offline
  • CM_MLPERF_LOADGEN_MODE: performance
  • CM_SKIP_PREPROCESS_DATASET: no
  • CM_SKIP_MODEL_DOWNLOAD: no
  • CM_MLPERF_SUT_NAME_IMPLEMENTATION_PREFIX: dummy_harness
  • CM_MLPERF_SKIP_RUN: no
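
Any of these defaults can be overridden at run time; for example, to switch LoadGen into accuracy mode (key name taken from the list above):

cmr "reproduce mlcommons mlperf inference harness dummy-harness dummy" --env.CM_MLPERF_LOADGEN_MODE=accuracy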

Native script being run

No run file exists for Windows


Script output

cmr "reproduce mlcommons mlperf inference harness dummy-harness dummy [variations]" [--input_flags] -j