Serial Runner

Serial execution of pipeline jobs.

This module provides the SerialRunner class for executing pipeline jobs sequentially on a single device, processing one job at a time in the order they are generated.

Example

>>> from anomalib.pipelines.components.runners import SerialRunner
>>> from anomalib.pipelines.components.base import JobGenerator
>>> generator = JobGenerator()
>>> runner = SerialRunner(generator)
>>> results = runner.run({"param": "value"})

The serial runner handles:

  • Sequential execution of jobs in order

  • Progress tracking with progress bars

  • Result collection and combination

  • Error handling for failed jobs

This is useful when:

  • Resources are limited to a single device

  • Jobs need to be executed in a specific order

  • Debugging pipeline execution

  • Simple workflows that don’t require parallelization

The runner implements the Runner interface defined in anomalib.pipelines.components.base.
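
Conceptually, serial execution reduces to a simple gather loop. The sketch below is illustrative only, not the library's verbatim implementation; the generate_jobs, run, collect, and job_class names are assumptions based on the Runner pattern described above:

# Illustrative sketch of the serial gather loop, not the library's
# actual implementation; generate_jobs/run/collect/job_class are
# assumed names based on the description above.
from anomalib.pipelines.components.runners.serial import SerialExecutionError

def run_serially(generator, args, prev_stage_results=None):
    results = []
    for job in generator.generate_jobs(args, prev_stage_results):
        try:
            # jobs execute one at a time, in generation order
            results.append(job.run())
        except Exception as exc:
            raise SerialExecutionError(f"Job failed: {exc}") from exc
    # combine per-job results into a single gathered result
    return generator.job_class.collect(results)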

exception anomalib.pipelines.components.runners.serial.SerialExecutionError

Bases: Exception

Raised when a job fails during serial execution.
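
For example, a caller can catch this exception to handle a failed stage explicitly (the generator construction here mirrors the module example above):

from anomalib.pipelines.components.base import JobGenerator
from anomalib.pipelines.components.runners import SerialRunner
from anomalib.pipelines.components.runners.serial import SerialExecutionError

runner = SerialRunner(JobGenerator())
try:
    results = runner.run({"param": "value"})
except SerialExecutionError:
    # at least one job raised during execution; log and abort this stage
    raise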

class anomalib.pipelines.components.runners.serial.SerialRunner(generator)

Bases: Runner

Serial executor for running jobs sequentially.

This runner executes jobs one at a time, providing progress tracking and error handling throughout serial execution.

Parameters:

generator (JobGenerator) – Generator that creates jobs to be executed.

Example

Create a runner and execute jobs sequentially:

>>> from anomalib.pipelines.components.runners import SerialRunner
>>> from anomalib.pipelines.components.base import JobGenerator
>>> generator = JobGenerator()
>>> runner = SerialRunner(generator)
>>> results = runner.run({"param": "value"})

The runner handles:

  • Sequential execution of jobs

  • Progress tracking with progress bars

  • Result collection and combination

  • Error handling for failed jobs
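
To supply jobs, a concrete generator subclasses JobGenerator. The following is a hypothetical sketch: the authoritative abstract interface lives in anomalib.pipelines.components.base, and the method names below (generate_jobs, job_class, run, collect, save) are assumptions, not verbatim anomalib signatures:

# Hypothetical sketch; consult anomalib.pipelines.components.base for
# the authoritative abstract methods. Names below are assumptions.
from anomalib.pipelines.components.base import Job, JobGenerator
from anomalib.pipelines.components.runners import SerialRunner

class SquareJob(Job):
    name = "square"

    def __init__(self, value):
        self.value = value

    def run(self, task_id=None):
        # the actual work of one job
        return self.value**2

    @staticmethod
    def collect(results):
        # combine per-job outputs into the gathered result
        return {"squares": results}

    @staticmethod
    def save(results):
        # persist gathered results if needed
        pass

class SquareJobGenerator(JobGenerator):
    @property
    def job_class(self):
        return SquareJob

    def generate_jobs(self, args, prev_stage_result=None):
        # values may come from the stage args or from an earlier stage
        for value in prev_stage_result or args.get("values", []):
            yield SquareJob(value)

runner = SerialRunner(SquareJobGenerator())
results = runner.run({"values": [1, 2, 3]})  # -> {"squares": [1, 4, 9]}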

run(args, prev_stage_results=None)

Execute jobs sequentially and gather results.

This method runs each job one at a time, collecting results and handling any failures that occur during execution.

Parameters:
  • args (dict) –

    Arguments specific to the job. For example, if there is a pipeline defined where one of the job generators is hyperparameter optimization, then the pipeline configuration file will look something like:

    arg1:
    arg2:
    hpo:
        param1:
        param2:
        ...
    

    In this case, args will receive a dictionary with all keys under hpo, as the snippet after this parameter list shows.

  • prev_stage_results (PREV_STAGE_RESULT, optional) – Results from the previous pipeline stage. Used when the current stage depends on previous results. Defaults to None.
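
For instance, given the hpo block above, the call would look like this (the parameter values shown are placeholders):

# args holds only the keys nested under "hpo"; values are placeholders
args = {"param1": 0.01, "param2": 10}
results = runner.run(args)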

Returns:

Combined results from all executed jobs.

Return type:

GATHERED_RESULTS

Raises:

SerialExecutionError – If any job fails during execution.
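
When stages are chained, the gathered result of one run() call feeds the prev_stage_results of the next. A minimal sketch, reusing the hypothetical SquareJobGenerator from the class example above:

from anomalib.pipelines.components.runners import SerialRunner

stage1 = SerialRunner(SquareJobGenerator())
gathered = stage1.run({"values": [1, 2, 3]})  # -> {"squares": [1, 4, 9]}

# A dependent stage consumes the first stage's gathered results.
stage2 = SerialRunner(SquareJobGenerator())
final = stage2.run({}, prev_stage_results=gathered["squares"])
# -> {"squares": [1, 16, 81]}: the squares of stage 1's outputs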