Anomalib in 15 Minutes#
This section will walk you through the steps to train a model and use it to detect anomalies in a dataset.
Installation#
Installation is simple and can be done in two ways. The first is through PyPI, and the second is through a local installation. PyPI installation is recommended if you want to use the library without making any changes to the source code. If you want to make changes to the library, then a local installation is recommended.
Installing the Installer
Anomalib comes with a CLI installer that can be used to install the full package. The installer can be installed using the following commands:
# Option 1: Install from PyPI
pip install anomalib

# Option 2: Install from source
# 1. Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate    # Linux/macOS
.venv\Scripts\activate       # Windows

# 2. Clone the repository
git clone https://github.com/openvinotoolkit/anomalib.git
cd anomalib

# 3. Install in development mode
pip install -e .
The main reason the PyPI and source installations do not include the full package is to keep the installation wheel small. The CLI installer also automates steps such as finding the torch build that matches your CUDA/cuDNN version.
The next section demonstrates how to install the full package using the CLI installer.
Installing the Full Package
After installing anomalib, you can install the full package using the following commands:
anomalib install -h

Usage: anomalib install [-h] [-v] [--option {core,full,openvino,dev}]

Install the full-package for anomalib.

Options:
  -h, --help            Show this help message and exit.
  -v, --verbose         Show verbose output during installation.
  --option {core,full,openvino,dev}
                        Installation option to use. Options are:
                        - core: Install only core dependencies
                        - full: Install all dependencies
                        - openvino: Install OpenVINO dependencies
                        - dev: Install development dependencies
                        (default: full)
As can be seen above, the only available sub-command at the moment is install. The install sub-command can install either the full package or specific components of it. By default, it installs the full package; to install only specific components, use the --option flag.
# Install core dependencies only (basic training and evaluation via Torch and Lightning)
anomalib install --option core

# Install full dependencies (includes all optional dependencies)
anomalib install --option full

# Install OpenVINO dependencies (for edge deployment with a smaller wheel size)
anomalib install --option openvino

# Install development dependencies (for contributing to anomalib)
anomalib install --option dev

# Install with verbose output (shows detailed installation progress)
anomalib install -v
Example installation output:

❯ anomalib install --option full
Installing anomalib with full dependencies...
Successfully installed anomalib and all dependencies.
After following these steps, your environment will be ready to use anomalib!
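To confirm that the environment is set up correctly, a quick sanity check such as the one below can help. It is a minimal sketch that only assumes the anomalib package imports cleanly and exposes a __version__ attribute.

# Quick sanity check that anomalib is importable
import anomalib

print(anomalib.__version__)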
Training#
Anomalib supports both API- and CLI-based training. The API is more flexible and allows for more customization, while the CLI might be easier for those who would like to use anomalib off the shelf.
# 1. Import required modules
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import EfficientAd
# 2. Create a dataset
# MVTec is a popular dataset for anomaly detection
datamodule = MVTec(
root="./datasets/MVTec", # Path to download/store the dataset
category="bottle", # MVTec category to use
train_batch_size=32, # Number of images per training batch
eval_batch_size=32, # Number of images per validation/test batch
num_workers=8, # Number of parallel processes for data loading
)
# 3. Initialize the model
# EfficientAd is a good default choice for beginners
model = EfficientAd()
# 4. Create the training engine
engine = Engine(max_epochs=10) # Train for 10 epochs
# 5. Train the model
engine.fit(datamodule=datamodule, model=model)
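After training, the same Engine can evaluate the model on the test split and, optionally, export it for deployment. The snippet below is a minimal sketch that continues from the example above; the export call assumes the ExportType enum from anomalib.deploy and should be treated as illustrative rather than definitive.

# 6. Evaluate the trained model on the test split
test_results = engine.test(datamodule=datamodule, model=model)
print(test_results)

# 7. (Optional) Export the trained model for deployment
# Assumes anomalib.deploy.ExportType and Engine.export are available.
from anomalib.deploy import ExportType

engine.export(model=model, export_type=ExportType.OPENVINO)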
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Getting Started with Anomalib Training
# ------------------------------------
# This example shows the basic steps to train an anomaly detection model.
# 1. Basic Training
# Train a model using default configuration (recommended for beginners)
echo "Training with default configuration..."
anomalib train --model efficient_ad
# 2. Training with Basic Customization
# Customize basic parameters like batch size and epochs
echo -e "\nTraining with custom parameters..."
anomalib train --model efficient_ad \
--data.train_batch_size 32 \
--trainer.max_epochs 10
# 3. Using a Different Dataset
# Train on a specific category of MVTec dataset
echo -e "\nTraining on MVTec bottle category..."
anomalib train --model efficient_ad \
--data.category bottle
Inference#
Anomalib includes multiple inferencers, including Torch, Lightning, Gradio, and OpenVINO inferencers, to perform inference with a trained or exported model. Here we show an inference example using the Lightning inferencer.
Lightning Inference
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""Getting Started with Anomalib Inference using the Python API.
This example shows how to perform inference on a trained model
using the Anomalib Python API.
"""
# 1. Import required modules
from pathlib import Path
from anomalib.data import PredictDataset
from anomalib.engine import Engine
from anomalib.models import EfficientAd
# 2. Initialize the model and load weights
model = EfficientAd()
engine = Engine()
# 3. Prepare test data
# You can use a single image or a folder of images
dataset = PredictDataset(
path=Path("path/to/test/images"),
image_size=(256, 256),
)
# 4. Get predictions
predictions = engine.predict(
model=model,
dataset=dataset,
ckpt_path="path/to/model.ckpt",
)
# 5. Access the results
if predictions is not None:
for prediction in predictions:
image_path = prediction.image_path
anomaly_map = prediction.anomaly_map # Pixel-level anomaly heatmap
pred_label = prediction.pred_label # Image-level label (0: normal, 1: anomalous)
pred_score = prediction.pred_score # Image-level anomaly score
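Each prediction's pixel-level anomaly map can be inspected directly. The snippet below is an illustrative sketch that assumes matplotlib is installed and that the anomaly map is a 2-D array (or a tensor that squeezes to one) on the CPU.

# Visualize the anomaly map of the first prediction (illustrative)
import matplotlib.pyplot as plt

first = predictions[0]
plt.imshow(first.anomaly_map.squeeze(), cmap="jet")
plt.title(f"label={first.pred_label}, score={float(first.pred_score):.3f}")
plt.axis("off")
plt.show()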
#!/usr/bin/env bash
# shellcheck shell=bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Getting Started with Anomalib Inference
# This example shows how to perform inference using Engine().predict() arguments.
echo "=== Anomalib Inference Examples ==="
echo -e "\n1. Basic Inference with Checkpoint Path"
echo "# Predict using a model checkpoint"
anomalib predict \
--ckpt_path "./results/efficient_ad/mvtec/bottle/weights/model.ckpt" \
--data_path path/to/image.jpg
echo -e "\n2. Inference with Directory Path"
echo "# Predict on all images in a directory"
anomalib predict \
--ckpt_path "./results/efficient_ad/mvtec/bottle/weights/model.ckpt" \
--data_path "./datasets/mvtec/bottle/test"
echo -e "\n3. Inference with Datamodule"
echo "# Use a datamodule for inference"
anomalib predict \
--ckpt_path "./results/my_dataset/weights/model.ckpt" \
--datamodule.class_path anomalib.data.Folder \
--datamodule.init_args.name "my_dataset" \
--datamodule.init_args.root "./datasets/my_dataset" \
--datamodule.init_args.normal_dir "good" \
--datamodule.init_args.abnormal_dir "defect"
echo -e "\n4. Inference with Return Predictions"
echo "# Return predictions instead of saving to disk"
anomalib predict \
--ckpt_path "./results/efficient_ad/mvtec/bottle/weights/model.ckpt" \
--data_path path/to/image.jpg \
--return_predictions
echo -e "\n=== Example Output ==="
echo '
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
[2024-01-01 12:00:00][INFO][anomalib][predict]: Loading model from ./results/my_dataset/weights/model.ckpt
[2024-01-01 12:00:01][INFO][anomalib][predict]: Prediction started
[2024-01-01 12:00:02][INFO][anomalib][predict]: Predictions saved to ./results/my_dataset/predictions'
echo -e "\nNote: Replace paths according to your setup."
echo "The predictions will be saved in the results directory by default unless --return_predictions is used."
Torch Inference
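A rough sketch of Torch-based inference is shown below. It assumes the model has first been exported to a standalone Torch model and that the TorchInferencer class is available from anomalib.deploy; the paths are placeholders.

# Torch inference sketch (paths are placeholders)
from anomalib.deploy import TorchInferencer

inferencer = TorchInferencer(path="path/to/exported/model.pt", device="cpu")
prediction = inferencer.predict(image="path/to/test/image.png")
print(prediction.pred_label, prediction.pred_score)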
OpenVINO Inference
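Similarly, a sketch of OpenVINO-based inference is shown below. It assumes the model has been exported to the OpenVINO format and that the OpenVINOInferencer class is available from anomalib.deploy; the paths are placeholders.

# OpenVINO inference sketch (paths are placeholders)
from anomalib.deploy import OpenVINOInferencer

inferencer = OpenVINOInferencer(path="path/to/exported/model.xml", device="CPU")
prediction = inferencer.predict(image="path/to/test/image.png")
print(prediction.pred_label, prediction.pred_score)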
Gradio Inference
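Anomalib also provides a Gradio-based demo for interactive inference. As a rough illustration of the idea (not the bundled script itself), a minimal Gradio app could wrap one of the inferencers above; the snippet assumes gradio is installed and reuses the placeholder paths from the Torch example.

# Minimal Gradio demo around a Torch inferencer (illustrative only)
import gradio as gr

from anomalib.deploy import TorchInferencer

inferencer = TorchInferencer(path="path/to/exported/model.pt", device="cpu")


def infer(image):
    """Run inference on an uploaded image and return the anomaly heat map."""
    result = inferencer.predict(image=image)
    return result.heat_map


demo = gr.Interface(fn=infer, inputs=gr.Image(), outputs=gr.Image(), title="Anomalib demo")
demo.launch()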
Hyper-Parameter Optimization#
Anomalib supports hyper-parameter optimization (HPO) using wandb and comet.ml. The commands below show how to launch an HPO sweep with each backend.
# To be enabled in v1.1
# To perform HPO using a wandb sweep
anomalib hpo --backend WANDB --sweep_config tools/hpo/configs/wandb.yaml

# To perform HPO using a comet.ml sweep
anomalib hpo --backend COMET --sweep_config tools/hpo/configs/comet.yaml
Experiment Management#
Anomalib integrates with various experiment-tracking libraries, such as Comet, TensorBoard, and wandb, through Lightning loggers.
To run a training experiment with experiment tracking, you will need a configuration file that enables the desired logger; the experiment can then be launched with that configuration file.
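For reference, a minimal sketch of enabling a tracker through the Python API is shown below. It assumes that an AnomalibWandbLogger class is available from anomalib.loggers and that Engine accepts a logger argument, so treat the import path, project name, and arguments as illustrative.

# Sketch: attach an experiment-tracking logger via the Python API (illustrative)
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.loggers import AnomalibWandbLogger  # assumed import path
from anomalib.models import EfficientAd

logger = AnomalibWandbLogger(project="anomalib")  # hypothetical project name
engine = Engine(logger=logger, max_epochs=10)
engine.fit(datamodule=MVTec(category="bottle"), model=EfficientAd())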
Benchmarking#
Anomalib provides a benchmarking tool to evaluate the performance of anomaly detection models on a given dataset. It can be used to evaluate a single model on a dataset or to compare the performance of multiple models on the same dataset.
Each model in anomalib is benchmarked on a set of datasets, and the results are available in src/anomalib/models/<model_name>/README.md. For example, the MVTec AD results for the Patchcore model are available in the corresponding README.md file.
To run the benchmarking tool, run the following command:
anomalib benchmark --config tools/benchmarking/benchmark_params.yaml
Reference#
If you use this library and love it, use this to cite it:
@inproceedings{akcay2022anomalib,
  title={Anomalib: A deep learning library for anomaly detection},
  author={Akcay, Samet and Ameln, Dick and Vaidya, Ashwin and Lakshmanan, Barath and Ahuja, Nilesh and Genc, Utku},
  booktitle={2022 IEEE International Conference on Image Processing (ICIP)},
  pages={1706--1710},
  year={2022},
  organization={IEEE}
}