Benchmarking Pipeline#
The benchmarking pipeline allows you to run multiple models across combinations of parameters and dataset categories to collect metrics. The benchmarking run is configured using a config file that specifies the grid-search parameters. A sample config file is shown below:
```yaml
accelerator:
  - cuda
  - cpu
benchmark:
  seed: 42
  model:
    class_path:
      grid_search: [Padim, Patchcore]
  data:
    class_path: MVTec
    init_args:
      category:
        grid:
          - bottle
          - cable
          - capsule
```
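To make the grid semantics concrete, the short sketch below (plain Python, not part of Anomalib) expands the `grid_search` and `grid` entries from this config into individual runs; every swept model is combined with every swept category.

```python
from itertools import product

# Swept values taken from the sample config above.
models = ["Padim", "Patchcore"]              # model.class_path.grid_search
categories = ["bottle", "cable", "capsule"]  # data.init_args.category.grid

runs = list(product(models, categories))
print(len(runs))  # 2 models x 3 categories -> 6 benchmark runs
for model, category in runs:
    print(model, category)
```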
The `accelerator` parameter is specific to the pipeline and is used to configure the runners. When `cuda` is passed, a parallel runner is added with the number of jobs equal to the number of CUDA devices. Since each job is independent, throughput can be increased by distributing the jobs across the individual accelerators. The `cpu` jobs are run serially.
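The scheduling idea can be sketched as follows. This is an illustration of the parallel-versus-serial split, not Anomalib's actual runner code, and `run_job` is a hypothetical stand-in for executing a single benchmark job.

```python
from concurrent.futures import ProcessPoolExecutor

import torch


def run_job(job: dict, device: str) -> None:
    """Hypothetical stand-in: the real job would train and evaluate a model."""
    print(f"Running {job} on {device}")


def run_cuda_jobs(jobs: list[dict]) -> None:
    """Independent jobs are spread across all visible CUDA devices in parallel."""
    num_devices = torch.cuda.device_count()
    with ProcessPoolExecutor(max_workers=max(num_devices, 1)) as pool:
        for index, job in enumerate(jobs):
            pool.submit(run_job, job, f"cuda:{index % max(num_devices, 1)}")


def run_cpu_jobs(jobs: list[dict]) -> None:
    """CPU jobs are executed one after another."""
    for job in jobs:
        run_job(job, "cpu")
```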
Running the Benchmark Pipeline#
There are two ways to run the benchmark pipeline: as an `anomalib` subcommand, or as a standalone entrypoint.
CLI
```bash
# Using the Anomalib entrypoint
anomalib benchmark --config tools/experimental/benchmarking/sample.yaml

# Using the entrypoint in tools
python tools/experimental/benchmarking/benchmark.py --config tools/experimental/benchmarking/sample.yaml
```
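The pipeline can also be launched from Python. The sketch below assumes the `Benchmark` pipeline class is importable from `anomalib.pipelines`, mirroring the standalone entrypoint in `tools/experimental/benchmarking/benchmark.py`; the exact import path and `run()` behavior may differ between Anomalib versions.

```python
# Minimal sketch of launching the benchmark from Python instead of the CLI.
# The import location of Benchmark is an assumption; verify it for your version.
from anomalib.pipelines import Benchmark

if __name__ == "__main__":
    # Assumption: run() picks up command-line arguments such as --config
    # when called without arguments, like the CLI entrypoint.
    Benchmark().run()
```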
Benchmark Pipeline Class#
The pipeline is built from the following components:
- Benchmark Job
- Benchmark Job Generator
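For orientation, the following illustrative skeleton (simplified, hypothetical names; not Anomalib's actual implementation) shows how the two pieces relate: the job generator expands the swept config values into concrete jobs, and each job encapsulates one independent benchmark run whose metrics are later gathered.

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class BenchmarkJob:
    """One independent run: a single model on a single dataset category."""

    model: str
    category: str
    seed: int

    def run(self) -> dict:
        # Stub: the real job would train the model, test it, and return metrics.
        return {"model": self.model, "category": self.category, "seed": self.seed}


class BenchmarkJobGenerator:
    """Turns the grid-search section of the config into a stream of jobs."""

    def __init__(self, models: list[str], categories: list[str], seed: int = 42) -> None:
        self.models = models
        self.categories = categories
        self.seed = seed

    def generate_jobs(self) -> Iterator[BenchmarkJob]:
        for model in self.models:
            for category in self.categories:
                yield BenchmarkJob(model=model, category=category, seed=self.seed)


# Example: gather results from all generated jobs.
generator = BenchmarkJobGenerator(["Padim", "Patchcore"], ["bottle", "cable", "capsule"])
results = [job.run() for job in generator.generate_jobs()]
```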