Feature extractors#

This guide demonstrates how different backbones can be used as feature extractors for anomaly detection models. Most models use the Timm Feature Extractor, except CS-Flow, which uses the TorchFX Feature Extractor. Below we show how to select different backbones as feature extractors through both the API and the CLI.

See also

For implementation details, refer to the Timm Feature Extractor and TorchFX Feature Extractor classes.

Available backbones and layers#

Available Timm models are listed on the Timm GitHub page.

In most cases, we want to use a pretrained backbone, so we can get a list of all such models using the following code:

import timm
# List all pretrained models in timm
for model_name in timm.list_models(pretrained=True):
    print(model_name)

Once we have selected a model, we can obtain the available layer names using the following code:

import timm
model = timm.create_model("resnet18", features_only=True)
# Print module names
print(model.feature_info.module_name())
>>>['act1', 'layer1', 'layer2', 'layer3', 'layer4']

model = timm.create_model("mobilenetv3_large_100", features_only=True)
print(model.feature_info.module_name())
>>>['blocks.0.0', 'blocks.1.1', 'blocks.2.2', 'blocks.4.1', 'blocks.6.0']

We can then use the selected model name and layer names with either the API or a config file.

When using TorchFX for feature extraction, you can pass a model name, a custom model, or a model instance. In this guide, we cover pretrained models from Torchvision passed by name. For using a custom model or a model instance, refer to the TorchFXFeatureExtractor class examples.

Available Torchvision models are listed on the Torchvision models page.

We can get the layer names for a selected model using the following code:

# Import model and function to list node names
from torchvision.models import resnet18
from torchvision.models.feature_extraction import get_graph_node_names

# Make an instance of the model with default (latest) weights
model = resnet18(weights="DEFAULT")

# Get and print node (layer) names
train_nodes, eval_nodes = get_graph_node_names(model)
print(eval_nodes)
>>>['x', 'conv1', 'bn1', 'relu', 'maxpool', ..., 'layer4.1.relu_1', 'avgpool', 'flatten', 'fc']

As a result, we get a list of all model nodes, which is quite long.

For example, if we want only the output of the last node in the block named layer4, we specify layer4.1.relu_1. To avoid writing layer4.1.relu_1, we can shorten it to layer4, which also returns the last output of the layer4 block.

We can then use the selected model name and layer names with either the API or a config file.

See also

Additional information about TorchFX feature extraction can be found on the PyTorch FX page and the feature_extraction documentation page.


Some models might not support every backbone.

Backbone and layer selection#

When using the API, we need to specify the backbone and layers when instantiating a model with a non-default backbone.

# Import the required modules
from anomalib.data import MVTec
from anomalib.models import Padim
from anomalib.engine import Engine

# Initialize the datamodule, model, and engine
datamodule = MVTec(num_workers=0)
# Specify backbone and layers
model = Padim(backbone="resnet18", layers=["layer1", "layer2"])
engine = Engine(image_metrics=["AUROC"], pixel_metrics=["AUROC"])

# Train the model
engine.fit(datamodule=datamodule, model=model)

In the following example config, we can see that we need to specify two parameters: the backbone and the layers list.

model:
  class_path: anomalib.models.Padim
  init_args:
    layers:
      - blocks.1.1
      - blocks.2.2
    input_size: null
    backbone: mobilenetv3_large_100
    pre_trained: true
    n_features: 50

Then we can train using:

anomalib train --config <path/to/config>