Benchmarking Pipeline#

The benchmarking pipeline allows you to run multiple models across combinations of parameters and dataset categories to collect metrics. The benchmarking run is configured using a config file that specifies the grid-search parameters. A sample config file is shown below:

accelerator:
  - cuda
  - cpu
benchmark:
  seed: 42
  model:
    class_path:
      grid_search: [Padim, Patchcore]
  data:
    class_path: MVTec
    init_args:
      category:
        grid:
          - bottle
          - cable
          - capsule
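
Each value marked for grid search is expanded into the cartesian product of all grid values, so the config above yields one benchmark job per model/category pair (2 models × 3 categories = 6 jobs). The following is only an illustrative sketch of that expansion, not the pipeline's internal code:

from itertools import product

# Illustrative expansion of the grid values from the sample config above.
models = ["Padim", "Patchcore"]              # model.class_path grid values
categories = ["bottle", "cable", "capsule"]  # data category grid values

# One benchmark job per combination of grid values.
jobs = [{"model": m, "category": c} for m, c in product(models, categories)]
print(len(jobs))  # 6 jobs: 2 models × 3 categories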

The accelerator parameter is specific to the pipeline and is used to configure the runners. When cuda is passed, the pipeline adds a parallel runner with the number of jobs equal to the number of available CUDA devices. Since each job is independent, throughput can be increased by distributing the jobs across individual accelerators. The cpu jobs are run serially.
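
Conceptually, the dispatch behaves like the sketch below. This is only an illustration of the behaviour described above, not the pipeline's actual implementation; run_single_job stands in for a hypothetical benchmark job:

import torch
from concurrent.futures import ProcessPoolExecutor

def run_jobs(jobs, accelerator, run_single_job):
    """Illustrative dispatch only: parallel across CUDA devices, serial on CPU."""
    if accelerator == "cuda" and torch.cuda.is_available():
        # One worker per CUDA device; independent jobs run concurrently.
        with ProcessPoolExecutor(max_workers=torch.cuda.device_count()) as pool:
            return list(pool.map(run_single_job, jobs))
    # cpu (or no GPU available): run jobs one after another.
    return [run_single_job(job) for job in jobs]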

Running the Benchmark Pipeline#

There are two ways to run the benchmark pipeline: as a subcommand of the Anomalib CLI, or as a standalone entrypoint.

CLI
# Using Anomalib entrypoint
anomalib benchmark --config tools/experimental/benchmarking/sample.yaml
# Using Entrypoint in tools
python tools/experimental/benchmarking/benchmark.py --config tools/experimental/benchmarking/sample.yaml

Benchmark Pipeline Class#

class anomalib.pipelines.benchmark.pipeline.Benchmark#

Bases: Pipeline

Benchmarking pipeline for evaluating anomaly detection models.

This pipeline handles running benchmarking experiments that evaluate and compare multiple anomaly detection models. It supports both serial and parallel execution depending on available hardware.

Example

>>> from anomalib.pipelines import Benchmark
>>> from anomalib.data import MVTec
>>> from anomalib.models import Padim, Patchcore
>>> # Initialize benchmark with models and datasets
>>> benchmark = Benchmark(
...     models=[Padim(), Patchcore()],
...     datasets=[MVTec(category="bottle"), MVTec(category="cable")]
... )
>>> # Run benchmark
>>> results = benchmark.run()
static get_parser(parser=None)#

Create a new parser if none is provided.

Return type:

ArgumentParser

run(args=None)#

Run the pipeline.

Parameters:

args (Namespace) – Arguments to run the pipeline with. These are the parsed arguments returned by the ArgumentParser.

Return type:

None
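
Taken together, get_parser() and run() mirror the CLI entrypoints shown earlier. A hedged usage sketch, assuming the standalone parser accepts the same --config flag as the CLI above:

>>> from anomalib.pipelines import Benchmark
>>> # Hypothetical: parse the sample config and run the pipeline with it
>>> parser = Benchmark.get_parser()
>>> args = parser.parse_args(["--config", "tools/experimental/benchmarking/sample.yaml"])
>>> Benchmark().run(args)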

Job#

Benchmark Job

Generator#

Benchmark Job Generator