FRE

Feature Reconstruction Error (FRE) Algorithm Implementation.

FRE is an anomaly detection model that scores inputs by their feature reconstruction error. The model extracts features from a pre-trained CNN backbone and learns to reconstruct them using a tied autoencoder; anomalies are detected by measuring the reconstruction error between the original and reconstructed features.

Example

>>> from anomalib.data import MVTec
>>> from anomalib.models import Fre
>>> from anomalib.engine import Engine
>>> datamodule = MVTec()
>>> model = Fre()
>>> engine = Engine()
>>> engine.fit(model, datamodule=datamodule)
>>> predictions = engine.predict(model, datamodule=datamodule)
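
The snippet above is the end-to-end library workflow. To make the scoring idea concrete, here is a minimal stand-alone sketch of the FRE principle in plain PyTorch (the random tensors are stand-ins for real backbone and autoencoder outputs; this is not the anomalib API):

>>> import torch
>>> features = torch.randn(8, 65536)       # flattened backbone features (N, D)
>>> reconstructed = torch.randn(8, 65536)  # stand-in autoencoder output (N, D)
>>> fre = torch.square(features - reconstructed)  # per-element error
>>> pred_score = fre.sum(dim=1)  # one anomaly score per image
>>> pred_score.shape
torch.Size([8])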

Paper:
Title: FRE: Feature Reconstruction Error for Unsupervised Anomaly Detection and Segmentation

URL: https://papers.bmvc2023.org/0614.pdf

See also

anomalib.models.image.fre.torch_model.FREModel:

PyTorch implementation of the FRE model architecture.

class anomalib.models.image.fre.lightning_model.Fre(backbone='resnet50', layer='layer3', pre_trained=True, pooling_kernel_size=2, input_dim=65536, latent_dim=220, pre_processor=True, post_processor=True, evaluator=True, visualizer=True)

Bases: AnomalibModule

FRE: Feature Reconstruction Error using a tied autoencoder.

The FRE model extracts features from a pre-trained CNN backbone and learns to reconstruct them using a tied autoencoder. Anomalies are detected by measuring the reconstruction error between original and reconstructed features.

Parameters:
  • backbone (str) – Backbone CNN network architecture. Defaults to "resnet50".

  • layer (str) – Layer name to extract features from the backbone CNN. Defaults to "layer3".

  • pre_trained (bool, optional) – Whether to use pre-trained backbone weights. Defaults to True.

  • pooling_kernel_size (int, optional) – Kernel size for pooling features extracted from the CNN. Defaults to 2.

  • input_dim (int, optional) – Dimension of features at output of specified layer. Defaults to 65536.

  • latent_dim (int, optional) – Reduced feature dimension after applying dimensionality reduction via shallow linear autoencoder. Defaults to 220.

  • pre_processor (PreProcessor | bool, optional) – Pre-processor to transform inputs before passing to model. Defaults to True.

  • post_processor (PostProcessor | bool, optional) – Post-processor to generate predictions from model outputs. Defaults to True.

  • evaluator (Evaluator | bool, optional) – Evaluator to compute metrics. Defaults to True.

  • visualizer (Visualizer | bool, optional) – Visualizer to display results. Defaults to True.

Example

>>> from anomalib.models import Fre
>>> model = Fre(
...     backbone="resnet50",
...     layer="layer3",
...     pre_trained=True,
...     pooling_kernel_size=2,
...     input_dim=65536,
...     latent_dim=220,
... )

See also

anomalib.models.image.fre.torch_model.FREModel:

PyTorch implementation of the FRE model architecture.

configure_optimizers()

Configure optimizers.

Returns:

Adam optimizer for training the model.

Return type:

torch.optim.Optimizer

property learning_type: LearningType

Return the learning type of the model.

Returns:

Learning type of the model (ONE_CLASS).

Return type:

LearningType

property trainer_arguments: dict[str, Any]

Return FRE-specific trainer arguments.

Returns:

Dictionary of trainer arguments:
  • gradient_clip_val: 0

  • max_epochs: 220

  • num_sanity_val_steps: 0

Return type:

dict[str, Any]

training_step(batch, *args, **kwargs)

Perform the training step of FRE.

For each batch, features are extracted from the CNN backbone and reconstructed using the tied autoencoder. The loss is computed as the MSE between original and reconstructed features.

Parameters:
  • batch (Batch) – Input batch containing images and labels.

  • args – Additional arguments (unused).

  • kwargs – Additional keyword arguments (unused).

Returns:

Dictionary containing the loss value.

Return type:

STEP_OUTPUT
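
Given the documented get_features() contract (see FREModel.get_features below), the step above reduces to roughly the following. This is a hedged sketch: training_step_sketch and the images argument are illustrative names, not the anomalib source.

>>> import torch
>>> def training_step_sketch(torch_model, images):
...     features, reconstructed, _ = torch_model.get_features(images)
...     # MSE between original and reconstructed features drives training.
...     loss = torch.nn.functional.mse_loss(reconstructed, features)
...     return {"loss": loss}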

validation_step(batch, *args, **kwargs)

Perform the validation step of FRE.

Similar to training, features are extracted and reconstructed. The reconstruction error is used to compute anomaly scores and maps.

Parameters:
  • batch (Batch) – Input batch containing images and labels.

  • args – Additional arguments (unused).

  • kwargs – Additional keyword arguments (unused).

Returns:

Dictionary containing anomaly scores and maps.

Return type:

STEP_OUTPUT
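
Since FREModel.forward() (documented below) already turns reconstruction error into scores and maps, validation is essentially a forward pass. A hedged sketch with illustrative names:

>>> def validation_step_sketch(torch_model, images):
...     predictions = torch_model(images)  # InferenceBatch, see forward() below
...     return {
...         "pred_score": predictions.pred_score,    # shape (N,)
...         "anomaly_map": predictions.anomaly_map,  # shape (N, 1, H, W)
...     }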

PyTorch implementation of the Feature Reconstruction Error (FRE) algorithm.

The FRE model extracts features from a pre-trained CNN backbone and learns to reconstruct them using a tied autoencoder. Anomalies are detected by measuring the reconstruction error between original and reconstructed features.

Example

>>> import torch
>>> from anomalib.models.image.fre.torch_model import FREModel
>>> model = FREModel(
...     backbone="resnet50",
...     layer="layer3",
...     input_dim=65536,
...     latent_dim=220,
...     pre_trained=True,
...     pooling_kernel_size=4
... )
>>> input_tensor = torch.randn(32, 3, 256, 256)
>>> output = model(input_tensor)
>>> output.pred_score.shape
torch.Size([32])
>>> output.anomaly_map.shape
torch.Size([32, 1, 256, 256])

Paper:
Title: FRE: Feature Reconstruction Error for Unsupervised Anomaly Detection and Segmentation

URL: https://papers.bmvc2023.org/0614.pdf

See also

anomalib.models.image.fre.lightning_model.Fre:

PyTorch Lightning implementation of the FRE model.

class anomalib.models.image.fre.torch_model.FREModel(backbone, layer, input_dim=65536, latent_dim=220, pre_trained=True, pooling_kernel_size=4)

Bases: Module

Feature Reconstruction Error (FRE) model implementation.

The model extracts features from a pre-trained CNN backbone and learns to reconstruct them using a tied autoencoder. Anomalies are detected by measuring the reconstruction error between original and reconstructed features.

Parameters:
  • backbone (str) – Pre-trained CNN backbone architecture (e.g. "resnet18", "resnet50", etc.).

  • layer (str) – Layer name from which to extract features (e.g. "layer2", "layer3", etc.).

  • input_dim (int, optional) – Dimension of features at output of specified layer. Defaults to 65536.

  • latent_dim (int, optional) – Reduced feature dimension after applying dimensionality reduction via shallow linear autoencoder. Defaults to 220.

  • pre_trained (bool, optional) – Whether to use pre-trained backbone weights. Defaults to True.

  • pooling_kernel_size (int, optional) – Kernel size for pooling features extracted from the CNN. Defaults to 4.

Example

>>> import torch
>>> from anomalib.models.image.fre.torch_model import FREModel
>>> model = FREModel(
...     backbone="resnet50",
...     layer="layer3",
...     input_dim=65536,
...     latent_dim=220
... )
>>> input_tensor = torch.randn(32, 3, 256, 256)
>>> output = model(input_tensor)
>>> output.pred_score.shape
torch.Size([32])
>>> output.anomaly_map.shape
torch.Size([32, 1, 256, 256])
forward(batch)

Generate anomaly predictions for input images.

The method:

  1. Extracts and reconstructs features using the tied autoencoder.
  2. Computes reconstruction error as anomaly scores.
  3. Generates pixel-wise anomaly maps.
  4. Upsamples anomaly maps to the input image size.

Parameters:

batch (torch.Tensor) – Input image batch of shape (N, C, H, W).

Returns:

Batch containing:
  • Anomaly scores of shape (N,)

  • Anomaly maps of shape (N, 1, H, W)

Return type:

InferenceBatch
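
A hedged sketch of the four steps, assuming the error is aggregated by summation (consistent with the documented shapes, but the exact reduction is an assumption rather than the anomalib source):

>>> import torch
>>> import torch.nn.functional as F
>>> def forward_sketch(torch_model, batch):
...     # 1. Extract and reconstruct features (see get_features below).
...     features, reconstructed, feature_shape = torch_model.get_features(batch)
...     # 2. Squared reconstruction error, restored to the spatial layout.
...     fre = torch.square(features - reconstructed).reshape(feature_shape)
...     # Summing over all feature dimensions gives one score per image.
...     pred_score = fre.sum(dim=(1, 2, 3))
...     # 3. Summing over channels gives a pixel-wise anomaly map ...
...     anomaly_map = fre.sum(dim=1, keepdim=True)
...     # 4. ... which is upsampled to the input image resolution.
...     anomaly_map = F.interpolate(anomaly_map, size=batch.shape[-2:], mode="bilinear")
...     return pred_score, anomaly_map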

get_features(batch)

Extract and reconstruct features from the pre-trained network.

Parameters:

batch (torch.Tensor) – Input image batch of shape (N, C, H, W).

Returns:

Tuple containing:
  • Original features of shape (N, D)

  • Reconstructed features of shape (N, D)

  • Original feature tensor shape (N, C, H, W)

where D is the flattened feature dimension.

Return type:

tuple[torch.Tensor, torch.Tensor, torch.Tensor]
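
A sketch of this round trip under assumed attribute names (feature_extractor, layer, pooling_kernel_size, and fre_model are hypothetical handles for the pieces described in the Parameters above, not confirmed anomalib attributes):

>>> import torch.nn.functional as F
>>> def get_features_sketch(torch_model, batch):
...     # Pull the configured layer's activations from the backbone.
...     features = torch_model.feature_extractor(batch)[torch_model.layer]
...     # Pooling shrinks the spatial dimensions before flattening.
...     features = F.avg_pool2d(features, kernel_size=torch_model.pooling_kernel_size)
...     feature_shape = features.shape                      # (N, C, H, W)
...     features = features.reshape(feature_shape[0], -1)   # flatten to (N, D)
...     reconstructed = torch_model.fre_model(features)     # tied-AE round trip
...     return features, reconstructed, feature_shape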

class anomalib.models.image.fre.torch_model.TiedAE(input_dim, latent_dim)

Bases: Module

Tied Autoencoder used for feature reconstruction error calculation.

The tied autoencoder uses shared weights between encoder and decoder to reduce the number of parameters while maintaining reconstruction capability.

Parameters:
  • input_dim (int) – Dimension of input features to the tied autoencoder.

  • latent_dim (int) – Dimension of the reduced latent space representation.

Example

>>> import torch
>>> from anomalib.models.image.fre.torch_model import TiedAE
>>> tied_ae = TiedAE(input_dim=1024, latent_dim=128)
>>> features = torch.randn(32, 1024)
>>> reconstructed = tied_ae(features)
>>> reconstructed.shape
torch.Size([32, 1024])
forward(features)

Run input features through the autoencoder.

The features are first encoded to a lower dimensional latent space and then decoded back to the original feature space using transposed weights.

Parameters:

features (torch.Tensor) – Input feature batch of shape (N, input_dim).

Returns:

Reconstructed features of shape (N, input_dim).

Return type:

torch.Tensor
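
A minimal sketch of the weight-tying mechanism itself: one weight matrix serves both directions, with the decoder applying its transpose. Parameter names here are illustrative, not necessarily the anomalib implementation.

>>> import torch
>>> import torch.nn.functional as F
>>> from torch import nn
>>> class TiedAESketch(nn.Module):
...     def __init__(self, input_dim, latent_dim):
...         super().__init__()
...         # A single shared weight matrix; only the biases are separate.
...         self.weight = nn.Parameter(torch.empty(latent_dim, input_dim))
...         nn.init.xavier_uniform_(self.weight)
...         self.encoder_bias = nn.Parameter(torch.zeros(latent_dim))
...         self.decoder_bias = nn.Parameter(torch.zeros(input_dim))
...     def forward(self, features):
...         latent = F.linear(features, self.weight, self.encoder_bias)
...         # Decode with the transposed encoder weights (the "tying").
...         return F.linear(latent, self.weight.t(), self.decoder_bias)
>>> out = TiedAESketch(input_dim=1024, latent_dim=128)(torch.randn(4, 1024))
>>> out.shape
torch.Size([4, 1024])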