Submission Generation

Streamline your MLPerf results using CM Framework

If you have followed the cm run commands under the individual model pages in the benchmarks directory, all valid results are aggregated in the cm cache folder. The following command can be used to browse the structure of the inference results folder generated by CM.

Get results folder structure

cm find cache --tags=get,mlperf,inference,results,dir | xargs tree

If you have not followed the cm run commands under the individual model pages in the benchmarks directory, please make sure that the results directory is structured as follows.

└── System description ID (SUT name)
    ├── system_meta.json
    └── Benchmark
        └── Scenario
            ├── Performance
            │   └── run_x/ # 1 run for all scenarios
            │       ├── mlperf_log_summary.txt
            │       └── mlperf_log_detail.txt
            ├── Accuracy
            │   ├── mlperf_log_summary.txt
            │   ├── mlperf_log_detail.txt
            │   ├── mlperf_log_accuracy.json
            │   └── accuracy.txt
            └── Compliance_Test_ID
                ├── Performance
                │   └── run_x/ # 1 run for all scenarios
                │       ├── mlperf_log_summary.txt
                │       └── mlperf_log_detail.txt
                ├── Accuracy
                │   ├── baseline_accuracy.txt
                │   ├── compliance_accuracy.txt
                │   ├── mlperf_log_accuracy.json
                │   └── accuracy.txt
                ├── verify_performance.txt
                └── verify_accuracy.txt # for TEST01 only

If you are submitting in the open division:

  • A model_mapping.json file should be included inside the SUT folder; it maps each custom model's full name to the official model name. The format of the JSON file is:

    {
        "custom_model_name_for_model1": "official_model_name_for_model1",
        "custom_model_name_for_model2": "official_model_name_for_model2"
    }
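Since a malformed mapping file can trip up the submission checker, it is worth validating it before packaging. A minimal sketch (the model names `my_resnet50_int8` and `my_bert_large_fp16` are hypothetical examples, not required values) writes an example file and confirms it parses as valid JSON:

```shell
# Minimal sketch: write an example model_mapping.json and confirm it
# parses as valid JSON. Model names below are hypothetical placeholders.
cat > model_mapping.json <<'EOF'
{
    "my_resnet50_int8": "resnet50",
    "my_bert_large_fp16": "bert-99"
}
EOF
# json.tool exits non-zero on invalid JSON (e.g. a trailing comma)
python3 -m json.tool model_mapping.json > /dev/null && echo "model_mapping.json is valid JSON"
```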

Once the results for all models are ready, you can use the following command to generate a valid submission tree compliant with the MLPerf requirements.

Generate actual submission tree

Closed Edge Submission

cm run script --tags=generate,inference,submission \
   --clean \
   --preprocess_submission=yes \
   --run-checker \
   --submitter=MLCommons \
   --tar=yes \
   --env.CM_TAR_OUTFILE=submission.tar.gz \
   --division=closed \
   --category=edge \
   --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
   --quiet

Closed Datacenter Submission

cm run script --tags=generate,inference,submission \
   --clean \
   --preprocess_submission=yes \
   --run-checker \
   --submitter=MLCommons \
   --tar=yes \
   --env.CM_TAR_OUTFILE=submission.tar.gz \
   --division=closed \
   --category=datacenter \
   --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
   --quiet

Open Edge Submission

cm run script --tags=generate,inference,submission \
   --clean \
   --preprocess_submission=yes \
   --run-checker \
   --submitter=MLCommons \
   --tar=yes \
   --env.CM_TAR_OUTFILE=submission.tar.gz \
   --division=open \
   --category=edge \
   --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
   --quiet

Open Datacenter Submission

cm run script --tags=generate,inference,submission \
   --clean \
   --preprocess_submission=yes \
   --run-checker \
   --submitter=MLCommons \
   --tar=yes \
   --env.CM_TAR_OUTFILE=submission.tar.gz \
   --division=open \
   --category=datacenter \
   --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
   --quiet
  • Use --hw_name="My system name" to give a meaningful system name. Examples can be seen here

  • Use --submitter=<Your organization> if your organization is an official MLCommons member and you would like to submit under its name

  • Use --hw_notes_extra option to add additional notes like --hw_notes_extra="Result taken by NAME"

  • Use the --results_dir option to specify the results folder for non-CM-based benchmarks

The above command should generate "submission.tar.gz" if there are no submission checker issues; you can then upload it to the MLCommons Submission UI.
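Before uploading, it can help to inspect the tarball's contents. The sketch below is illustrative only: it builds a tiny stand-in archive (the `closed/MLCommons` path mirrors the division/submitter layout, and `systems.json` is a placeholder file) so the inspection commands are runnable; on a real submission you would simply run the final `tar -tzf` line against the file produced by the command above.

```shell
# Minimal sketch: build a tiny stand-in submission archive, then list
# its contents the way you would inspect a real submission.tar.gz.
# closed/MLCommons and systems.json are placeholders for demonstration.
tmp=$(mktemp -d)
mkdir -p "$tmp/closed/MLCommons"
echo '{}' > "$tmp/closed/MLCommons/systems.json"
tar -czf submission.tar.gz -C "$tmp" closed
tar -tzf submission.tar.gz   # list archive contents without extracting
```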

Aggregate Results in GitHub

If you are collecting results across multiple systems, you can generate separate submissions, aggregate them in a GitHub repository (which can be private), and use it to generate a single tarball that can be uploaded to the MLCommons Submission UI.

Run the following command after replacing --repo_url with your GitHub repository URL.

cm run script --tags=push,github,mlperf,inference,submission \
   --repo_url=https://github.com/GATEOverflow/mlperf_inference_submissions_v4.1 \
   --commit_message="Results on <HW name> added by <Name>" \
   --quiet

At the end, you can download the GitHub repository and upload it to the MLCommons Submission UI.
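The final download-and-package step can be sketched as follows. This is an assumption about the workflow, not a CM command: the clone is shown commented out (it needs network access and your repository URL), and a placeholder directory stands in for the cloned repo so the packaging step is runnable.

```shell
# Minimal sketch: package an aggregated-submissions repo into one tarball.
# REPO_URL is a placeholder; replace it with your own repository.
REPO_URL="https://github.com/GATEOverflow/mlperf_inference_submissions_v4.1"
# git clone "$REPO_URL" aggregated_submissions   # real step, needs network
mkdir -p aggregated_submissions                  # stand-in for the cloned repo
echo placeholder > aggregated_submissions/README.md
tar -czf aggregated_submissions.tar.gz aggregated_submissions
```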