FastFlow#
FastFlow Lightning Model Implementation.
This module provides a PyTorch Lightning implementation of the FastFlow model for anomaly detection. FastFlow is a fast flow-based model that uses normalizing flows to model the distribution of features extracted from a pre-trained CNN backbone.
The model achieves competitive performance while maintaining fast inference times by leveraging normalizing flows to transform feature distributions into a simpler form that can be efficiently modeled.
Example
>>> from anomalib.data import MVTec
>>> from anomalib.models import Fastflow
>>> from anomalib.engine import Engine
>>> datamodule = MVTec()
>>> model = Fastflow()
>>> engine = Engine()
>>> engine.fit(model, datamodule=datamodule)
>>> predictions = engine.predict(model, datamodule=datamodule)
- Paper: FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows
See also
anomalib.models.image.fastflow.torch_model.FastflowModel: PyTorch implementation of the FastFlow model architecture.
anomalib.models.image.fastflow.loss.FastflowLoss: Loss function used to train the FastFlow model.
- class anomalib.models.image.fastflow.lightning_model.Fastflow(backbone='resnet18', pre_trained=True, flow_steps=8, conv3x3_only=False, hidden_ratio=1.0, pre_processor=True, post_processor=True, evaluator=True, visualizer=True)#
Bases: AnomalibModule
PyTorch Lightning module for the FastFlow algorithm.
The FastFlow model uses normalizing flows to transform feature distributions from a pre-trained CNN backbone into a simpler form that can be efficiently modeled for anomaly detection.
- Parameters:
backbone (str) – Backbone CNN network architecture. Available options are "resnet18", "wide_resnet50_2", etc. Defaults to "resnet18".
pre_trained (bool, optional) – Whether to use pre-trained backbone weights. Defaults to True.
flow_steps (int, optional) – Number of steps in the normalizing flow. Defaults to 8.
conv3x3_only (bool, optional) – Whether to use only 3x3 convolutions in the FastFlow model. Defaults to False.
hidden_ratio (float, optional) – Ratio used to calculate hidden variable channels. Defaults to 1.0.
pre_processor (PreProcessor | bool, optional) – Pre-processor to use for input data. Defaults to True.
post_processor (PostProcessor | bool, optional) – Post-processor to use for model outputs. Defaults to True.
evaluator (Evaluator | bool, optional) – Evaluator to compute metrics. Defaults to True.
visualizer (Visualizer | bool, optional) – Visualizer for model outputs. Defaults to True.
- Raises:
ValueError – If input_size is not provided during initialization.
Example
>>> from anomalib.models import Fastflow
>>> model = Fastflow(
...     backbone="resnet18",
...     pre_trained=True,
...     flow_steps=8
... )
- static configure_evaluator()#
Default evaluator.
Override in subclass for model-specific evaluator behaviour.
- Return type:
Evaluator
- configure_optimizers()#
Configure the optimizer used to train the model.
- Returns:
Adam optimizer
- Return type:
Optimizer
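A minimal, hypothetical sketch of overriding this hook in a subclass (the subclass name and hyperparameters are illustrative; it assumes the Lightning module exposes its torch model as self.model, as in other anomalib modules):
>>> import torch
>>> from anomalib.models import Fastflow
>>> class MyFastflow(Fastflow):
...     def configure_optimizers(self) -> torch.optim.Optimizer:
...         # Swap in different optimizer settings than the library default.
...         return torch.optim.Adam(self.model.parameters(), lr=1e-4, weight_decay=1e-5)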
- property learning_type: LearningType#
Return the learning type of the model.
- Returns:
Learning type of the model.
- Return type:
LearningType
- training_step(batch, *args, **kwargs)#
Run the model on the input batch and return the training loss.
- Parameters:
batch (dict[str, str | torch.Tensor]) – Input batch.
args – Additional arguments.
kwargs – Additional keyword arguments.
- Returns:
Dictionary containing the loss value.
- Return type:
STEP_OUTPUT
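For intuition, a standalone sketch of the computation this step performs, built from the documented FastflowModel and FastflowLoss (the batch shape and optimizer settings are illustrative, not the values used by the Engine; pre_trained=False only avoids downloading backbone weights):
>>> import torch
>>> from anomalib.models.image.fastflow.torch_model import FastflowModel
>>> from anomalib.models.image.fastflow.loss import FastflowLoss
>>> model = FastflowModel(input_size=(256, 256), backbone="resnet18", pre_trained=False)
>>> criterion = FastflowLoss()
>>> optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
>>> images = torch.randn(4, 3, 256, 256)  # stand-in for a batch of training images
>>> hidden_variables, jacobians = model(images)  # flow outputs in training mode
>>> loss = criterion(hidden_variables, jacobians)  # negative log-likelihood loss
>>> optimizer.zero_grad()
>>> loss.backward()
>>> optimizer.step()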
- validation_step(batch, *args, **kwargs)#
Perform the validation step and return the anomaly map.
- Parameters:
batch (dict[str, str | torch.Tensor]) – Input batch
args – Additional arguments.
kwargs – Additional keyword arguments.
- Returns:
Batch dictionary containing anomaly maps.
- Return type:
STEP_OUTPUT | None
FastFlow Torch Model Implementation.
- class anomalib.models.image.fastflow.torch_model.FastflowModel(input_size, backbone, pre_trained=True, flow_steps=8, conv3x3_only=False, hidden_ratio=1.0)#
Bases: Module
FastFlow.
Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows.
- Parameters:
input_size (tuple[int, int]) – Model input size as (height, width).
backbone (str) – Backbone CNN network architecture.
pre_trained (bool, optional) – Whether to use a pre-trained backbone. Defaults to True.
flow_steps (int, optional) – Number of flow steps. Defaults to 8.
conv3x3_only (bool, optional) – Whether to use only 3x3 convolutions in the FastFlow model. Defaults to False.
hidden_ratio (float, optional) – Ratio used to calculate hidden variable channels. Defaults to 1.0.
- Raises:
ValueError – When the backbone is not supported.
- forward(input_tensor)#
Forward-pass the input through the FastFlow model.
- Parameters:
input_tensor (torch.Tensor) – Input tensor.
- Returns:
During training, returns a tuple of (hidden_variables, log_jacobian_determinants). During validation/testing, returns the anomaly map.
- Return type:
Tensor | list[torch.Tensor] | tuple[list[torch.Tensor]]
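A short sketch of the two modes described above (illustrative shapes; pre_trained=False only avoids downloading backbone weights):
>>> import torch
>>> from anomalib.models.image.fastflow.torch_model import FastflowModel
>>> model = FastflowModel(input_size=(256, 256), backbone="resnet18", pre_trained=False)
>>> images = torch.randn(2, 3, 256, 256)
>>> hidden_variables, log_jacobians = model(images)  # training mode (the default for nn.Module)
>>> _ = model.eval()
>>> with torch.no_grad():
...     anomaly_map = model(images)  # validation/test mode output: the anomaly map, per the Returns above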
Loss function for the FastFlow Model Implementation.
This module implements the loss function used to train the FastFlow model. The loss is computed based on the hidden variables and Jacobian determinants produced by the normalizing flow transformations.
Example
>>> from anomalib.models.image.fastflow.loss import FastflowLoss
>>> criterion = FastflowLoss()
>>> hidden_vars = [torch.randn(2, 64, 32, 32)] # from NF blocks
>>> jacobians = [torch.randn(2)] # log det jacobians
>>> loss = criterion(hidden_vars, jacobians)
>>> loss.shape
torch.Size([])
See also
anomalib.models.image.fastflow.torch_model.FastflowModel: PyTorch implementation of the FastFlow model architecture.
- class anomalib.models.image.fastflow.loss.FastflowLoss(*args, **kwargs)#
Bases: Module
FastFlow Loss Module.
Computes the negative log-likelihood loss used to train the FastFlow model. The loss combines the log-likelihood of the hidden variables with the log determinant of the Jacobian matrices from the normalizing flow transformations.
- static forward(hidden_variables, jacobians)#
Calculate the FastFlow loss.
The loss is computed as the negative log-likelihood of the hidden variables transformed by the normalizing flows, taking into account the Jacobian determinants of the transformations.
- Parameters:
hidden_variables (list[torch.Tensor]) – List of hidden variable tensors produced by the normalizing flow transformations. Each tensor has shape (N, C, H, W), where N is the batch size.
jacobians (list[torch.Tensor]) – List of log determinants of the Jacobian matrices for each normalizing flow transformation. Each tensor has shape (N,), where N is the batch size.
- Returns:
Scalar loss value combining the negative log-likelihood of the hidden variables and the Jacobian determinants.
- Return type:
torch.Tensor
Example
>>> criterion = FastflowLoss()
>>> h_vars = [torch.randn(2, 64, 32, 32)]  # hidden variables
>>> jacs = [torch.randn(2)]  # log det jacobians
>>> loss = criterion(h_vars, jacs)
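The computation described above can be sketched as follows, assuming a standard Gaussian prior on the hidden variables (the helper name is hypothetical and this is not the verbatim library code):
>>> import torch
>>> def fastflow_nll_sketch(hidden_variables, jacobians):
...     loss = torch.tensor(0.0)
...     for hidden_variable, jacobian in zip(hidden_variables, jacobians):
...         # 0.5 * ||z||^2 is the Gaussian negative log-likelihood of the hidden variable;
...         # subtracting the log|det J| term accounts for the flow's change of variables.
...         loss = loss + torch.mean(0.5 * torch.sum(hidden_variable**2, dim=(1, 2, 3)) - jacobian)
...     return loss
>>> fastflow_nll_sketch([torch.randn(2, 64, 32, 32)], [torch.randn(2)]).shape
torch.Size([])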
FastFlow Anomaly Map Generator Implementation.
This module implements the anomaly map generation for the FastFlow model. The generator takes hidden variables from normalizing flow blocks and produces an anomaly heatmap by computing flow maps.
Example
>>> from anomalib.models.image.fastflow.anomaly_map import AnomalyMapGenerator
>>> generator = AnomalyMapGenerator(input_size=(256, 256))
>>> hidden_vars = [torch.randn(1, 64, 32, 32)] # from NF blocks
>>> anomaly_map = generator(hidden_vars) # returns anomaly heatmap
- class anomalib.models.image.fastflow.anomaly_map.AnomalyMapGenerator(input_size)#
Bases: Module
Generate anomaly heatmaps from FastFlow hidden variables.
The generator takes hidden variables from normalizing flow blocks and produces an anomaly heatmap. For each hidden variable tensor, it:
Computes negative log probability
Converts to probability via exponential
Interpolates to input size
Stacks and averages flow maps to produce final anomaly map
- Parameters:
input_size (ListConfig | tuple) – Target size for the anomaly map as (height, width). If a ListConfig is provided, it is converted to a tuple.
Example
>>> generator = AnomalyMapGenerator(input_size=(256, 256))
>>> hidden_vars = [torch.randn(1, 64, 32, 32)]  # from NF blocks
>>> anomaly_map = generator(hidden_vars)
>>> anomaly_map.shape
torch.Size([1, 1, 256, 256])
- forward(hidden_variables)#
Generate anomaly heatmap from hidden variables.
This implementation generates the heatmap based on the flow maps computed from the normalizing flow (NF) FastFlow blocks. Each block yields a flow map, which overall is stacked and averaged to produce an anomaly map.
- The process for each hidden variable is:
Compute negative log probability as mean of squared values
Convert to probability via exponential
Interpolate to input size
Stack all flow maps and average to get final anomaly map
- Parameters:
hidden_variables (list[torch.Tensor]) – List of hidden variables from each NF FastFlow block. Each tensor has shape (N, C, H, W).
- Returns:
Anomaly heatmap with shape (N, 1, H, W), where H and W match the input_size.
- Return type:
torch.Tensor
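The per-hidden-variable steps described above can be sketched as follows (hypothetical helper, not the verbatim library code; shapes are illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> def anomaly_map_sketch(hidden_variables, input_size=(256, 256)):
...     flow_maps = []
...     for hidden_variable in hidden_variables:
...         log_prob = -torch.mean(hidden_variable**2, dim=1, keepdim=True) * 0.5  # negative log probability
...         prob = torch.exp(log_prob)  # convert to probability
...         # Negate so low-probability (anomalous) regions score high, then resize to the input size.
...         flow_maps.append(F.interpolate(-prob, size=input_size, mode="bilinear", align_corners=False))
...     return torch.stack(flow_maps, dim=-1).mean(dim=-1)  # stack and average the flow maps
>>> anomaly_map_sketch([torch.randn(1, 64, 32, 32)]).shape
torch.Size([1, 1, 256, 256])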