lnst-evaluate-helper

A CLI tool that evaluates LNST (Linux Network Stack Test) Recipe test results into a simple PASS/FAIL format.

LNST produces detailed test result files containing device configuration, job execution logs, and performance measurements. This tool loads those files, extracts key information about the test run (recipe name, parameters), and applies configurable evaluation criteria to produce a clear pass/fail verdict.

Installation

Requires Python 3.13+.

pip install .

Or with uv:

uv sync

Usage

lnst-evaluate-helper --dir results/ --brief --evaluation basic

Input

Specify files and directories to analyze:

  • --file PATH - path to a single data file (repeatable)
  • --dir PATH - path to a directory to search recursively (repeatable)
  • --file-type {json,lrc} - type of data files to search for (default: json)

Both .json (human-readable) and .lrc (LNST binary/compressed) file formats are supported. The .lrc format requires the lnst package and provides direct access to the full recipe object hierarchy.
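
As a rough illustration of the --dir behavior, the recursive search can be pictured as a pathlib walk; this is a minimal sketch, and collect_result_files is a hypothetical helper, not part of the tool:

from pathlib import Path

def collect_result_files(dirs, files, file_type="json"):
    """Gather explicit --file paths plus a recursive --dir search.

    Illustrative only: the real CLI may filter or order results differently.
    """
    found = [Path(f) for f in files]
    for d in dirs:
        # rglob walks the directory tree recursively for the chosen extension
        found.extend(sorted(Path(d).rglob(f"*.{file_type}")))
    return found

# e.g. collect_result_files(dirs=["results/"], files=[], file_type="json")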

Description

By default the tool prints the full recipe name and all parameters. Use --brief for a one-line summary showing only the key parameters:

$ lnst-evaluate-helper --dir results/ --brief
results/0_SimpleNetworkRecipe/run-data-0.json: SimpleNetworkRecipe driver=mlx5_core ip_versions=('ipv4',) perf_tests=('tcp_stream',) perf_parallel_processes=1 perf_parallel_streams=1

Evaluations

Evaluations are added with --evaluation TYPE and can be specified multiple times. Each evaluation produces a PASS/FAIL result per file.

basic

Checks that every result entry in the file reports PASS.

lnst-evaluate-helper --dir results/ --brief --evaluation basic

flow-nonzero

Checks that every flow measurement has non-zero receiver throughput.

lnst-evaluate-helper --dir results/ --brief --evaluation flow-nonzero

flow-manual-threshold

Evaluates each flow measurement's receiver throughput against a manually specified threshold. Requires an --eval-param threshold=VALUE.

The threshold can be an absolute rate:

lnst-evaluate-helper --dir results/ --brief \
    --evaluation flow-manual-threshold --eval-param threshold=9gbps

Or a percentage of line rate (requires --eval-param line_rate=VALUE):

lnst-evaluate-helper --dir results/ --brief \
    --evaluation flow-manual-threshold --eval-param threshold=90% --eval-param line_rate=25gbps

Supported rate suffixes: bps, kbps, mbps, gbps, tbps.
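
To make the threshold arithmetic concrete, here is a minimal sketch of how a rate string with these suffixes could be parsed and how a percentage threshold resolves against a line rate. The function names are hypothetical and not part of the tool's API:

SUFFIXES = {"bps": 1.0, "kbps": 1e3, "mbps": 1e6, "gbps": 1e9, "tbps": 1e12}

def parse_rate(value: str) -> float:
    # Check longer suffixes first so "kbps" is not matched as "bps".
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * SUFFIXES[suffix]
    raise ValueError(f"unknown rate suffix in {value!r}")

def resolve_threshold(threshold: str, line_rate: str | None = None) -> float:
    if threshold.endswith("%"):
        if line_rate is None:
            raise ValueError("a percentage threshold requires line_rate")
        # e.g. threshold=90% with line_rate=25gbps -> 0.9 * 25e9 = 22.5e9 bps
        return float(threshold[:-1]) / 100 * parse_rate(line_rate)
    return parse_rate(threshold)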

Evaluation parameters

--eval-param KEY=VALUE applies to the immediately preceding --evaluation. This allows different evaluations to have independent parameters:

lnst-evaluate-helper --dir results/ --brief \
    --evaluation flow-manual-threshold --eval-param threshold=9gbps \
    --evaluation flow-nonzero

Here threshold=9gbps applies only to flow-manual-threshold, not to flow-nonzero.
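
One way to implement this "applies to the preceding --evaluation" behavior is a pair of custom argparse actions that attach each parameter to the most recently seen evaluation. This is a sketch under that assumption; the actual implementation may differ:

import argparse

class EvaluationAction(argparse.Action):
    """Record each --evaluation with its own empty parameter dict."""
    def __call__(self, parser, namespace, values, option_string=None):
        evaluations = getattr(namespace, "evaluations", None) or []
        evaluations.append({"type": values, "params": {}})
        setattr(namespace, "evaluations", evaluations)

class EvalParamAction(argparse.Action):
    """Attach KEY=VALUE to the most recently added --evaluation."""
    def __call__(self, parser, namespace, values, option_string=None):
        evaluations = getattr(namespace, "evaluations", None)
        if not evaluations:
            parser.error("--eval-param must follow an --evaluation")
        key, _, value = values.partition("=")
        evaluations[-1]["params"][key] = value

parser = argparse.ArgumentParser()
parser.add_argument("--evaluation", action=EvaluationAction)
parser.add_argument("--eval-param", action=EvalParamAction)

# args = parser.parse_args(["--evaluation", "flow-manual-threshold",
#                           "--eval-param", "threshold=9gbps",
#                           "--evaluation", "flow-nonzero"])
# args.evaluations == [{"type": "flow-manual-threshold",
#                       "params": {"threshold": "9gbps"}},
#                      {"type": "flow-nonzero", "params": {}}]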

Output format

  • --output-format human (default) - human-readable text with per-flow details
  • --output-format json - machine-readable JSON with structured data including recipe name, all parameters, and per-flow evaluation results with raw numeric values (bps, cpu percentages)

Project structure

lnst_evaluate_helper/
    main.py              # CLI entry point, argument parsing, output formatting
    interpreters/
        json.py          # Interpreter for .json result files
        lrc.py           # Interpreter for .lrc (LNST binary) result files

Each interpreter implements:

  • parse_test(file) - extracts recipe name and parameters
  • describe_test(file, brief) - formats a human-readable description
  • evaluate_test(file, evaluations) - runs evaluations and returns structured results
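
Expressed as module-level stubs, the shared interface looks roughly like this; the argument and return types are assumptions based on the descriptions above, not the actual signatures:

from typing import Any

def parse_test(file: str) -> tuple[str, dict[str, Any]]:
    """Return (recipe_name, parameters) extracted from one result file."""
    ...

def describe_test(file: str, brief: bool = False) -> str:
    """Return a human-readable description, one line when brief is True."""
    ...

def evaluate_test(file: str, evaluations: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Run each requested evaluation and return structured PASS/FAIL results."""
    ...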
