get-ml-model-gptj

Automatically generated README for this automation recipe: get-ml-model-gptj

Category: AI/ML models

License: Apache 2.0

  • CM meta description for this script: _cm.json
  • Output cached? True

Reuse this script in your project

Install MLCommons CM automation meta-framework

Pull CM repository with this automation recipe (CM script)

cm pull repo mlcommons@cm4mlops

Print the help for this CM script

cmr "get raw ml-model gptj gpt-j large-language-model" --help

Run this script

Run this script via CLI
cm run script --tags=get,raw,ml-model,gptj,gpt-j,large-language-model[,variations] [--input_flags]
Run this script via CLI (alternative)
cmr "get raw ml-model gptj gpt-j large-language-model [variations]" [--input_flags]
Run this script from Python
import cmind

r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'get,raw,ml-model,gptj,gpt-j,large-language-model',
                  'out': 'con',
                  # ... other input keys for this script ...
                 })

if r['return'] > 0:
    print(r['error'])
Run this script via Docker (beta)
cm docker script "get raw ml-model gptj gpt-j large-language-model[variations]" [--input_flags]
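
Variations are appended to the comma-separated tag list with a leading underscore (for example `_pytorch,_fp32`). A minimal sketch of how such a tag string can be composed, in plain Python with no CM dependency (the chosen variations here are only illustrative):

```python
# Base tags identifying this CM script (from the commands above).
BASE_TAGS = "get,raw,ml-model,gptj,gpt-j,large-language-model"

def tags_with_variations(variations):
    """Append variation names (with leading underscore) to the base tag list."""
    return ",".join([BASE_TAGS] + list(variations))

print(tags_with_variations(["_pytorch", "_fp32"]))
# get,raw,ml-model,gptj,gpt-j,large-language-model,_pytorch,_fp32
```

The resulting string is what `--tags=` receives on the CLI or the `'tags'` key receives in the Python API.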

Variations

  • No group (any combination of variations can be selected)

    • _batch_size.#
      • ENV variables:
        • CM_ML_MODEL_BATCH_SIZE: #
  • Group "download-tool"

    • _rclone (default)
      • ENV variables:
        • CM_DOWNLOAD_FILENAME: checkpoint
        • CM_DOWNLOAD_URL: <<<CM_RCLONE_URL>>>
    • _wget
      • ENV variables:
        • CM_DOWNLOAD_URL: <<<CM_PACKAGE_URL>>>
        • CM_DOWNLOAD_FILENAME: checkpoint.zip
  • Group "framework"

    • _pytorch (default)
      • ENV variables:
        • CM_ML_MODEL_DATA_LAYOUT: NCHW
        • CM_ML_MODEL_FRAMEWORK: pytorch
        • CM_ML_STARTING_WEIGHTS_FILENAME: <<<CM_PACKAGE_URL>>>
    • _saxml
  • Group "model-provider"

    • _intel
    • _mlcommons (default)
    • _nvidia
      • ENV variables:
        • CM_TMP_ML_MODEL_PROVIDER: nvidia
  • Group "precision"

    • _fp32
      • ENV variables:
        • CM_ML_MODEL_INPUT_DATA_TYPES: fp32
        • CM_ML_MODEL_PRECISION: fp32
        • CM_ML_MODEL_WEIGHT_DATA_TYPES: fp32
    • _fp8
      • ENV variables:
        • CM_ML_MODEL_INPUT_DATA_TYPES: fp8
        • CM_ML_MODEL_WEIGHT_DATA_TYPES: fp8
    • _int4
      • ENV variables:
        • CM_ML_MODEL_INPUT_DATA_TYPES: int4
        • CM_ML_MODEL_WEIGHT_DATA_TYPES: int4
    • _int8
      • ENV variables:
        • CM_ML_MODEL_INPUT_DATA_TYPES: int8
        • CM_ML_MODEL_PRECISION: int8
        • CM_ML_MODEL_WEIGHT_DATA_TYPES: int8
    • _uint8
      • ENV variables:
        • CM_ML_MODEL_INPUT_DATA_TYPES: uint8
        • CM_ML_MODEL_PRECISION: uint8
        • CM_ML_MODEL_WEIGHT_DATA_TYPES: uint8
Default variations

_mlcommons,_pytorch,_rclone
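
Each selected variation contributes the ENV variables listed above, with group defaults filled in when no member of a group is chosen. A rough sketch of that merge (the dict below copies only a subset of the listing; `env_for` is a hypothetical helper for illustration, not a CM API):

```python
# ENV variables contributed by a few variations (subset of the listing above).
VARIATION_ENV = {
    "_pytorch": {
        "CM_ML_MODEL_DATA_LAYOUT": "NCHW",
        "CM_ML_MODEL_FRAMEWORK": "pytorch",
    },
    "_fp32": {
        "CM_ML_MODEL_INPUT_DATA_TYPES": "fp32",
        "CM_ML_MODEL_PRECISION": "fp32",
        "CM_ML_MODEL_WEIGHT_DATA_TYPES": "fp32",
    },
    "_rclone": {"CM_DOWNLOAD_FILENAME": "checkpoint"},
}

def env_for(variations):
    """Merge the ENV contributions of the selected variations."""
    env = {}
    for v in variations:
        env.update(VARIATION_ENV.get(v, {}))
    return env
```

For example, `env_for(["_pytorch", "_fp32", "_rclone"])` approximates the environment the default variation set would produce.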

Script flags mapped to environment

  • --checkpoint=value → GPTJ_CHECKPOINT_PATH=value
  • --download_path=value → CM_DOWNLOAD_PATH=value
  • --to=value → CM_DOWNLOAD_PATH=value
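
The flag-to-environment mapping above can be expressed as a small lookup; `flags_to_env` is a hypothetical illustration, showing in particular that `--download_path` and `--to` both set CM_DOWNLOAD_PATH:

```python
# Script flags and the environment variables they map to (from the list above).
FLAG_TO_ENV = {
    "checkpoint": "GPTJ_CHECKPOINT_PATH",
    "download_path": "CM_DOWNLOAD_PATH",
    "to": "CM_DOWNLOAD_PATH",
}

def flags_to_env(flags):
    """Translate {flag: value} pairs into the env dict the script would see."""
    return {FLAG_TO_ENV[k]: v for k, v in flags.items() if k in FLAG_TO_ENV}
```

If both `--download_path` and `--to` are given, the later entry wins since both write the same variable.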

Native script being run


Script output

cmr "get raw ml-model gptj gpt-j large-language-model [variations]" [--input_flags] -j