Tiling#
Image tiling utilities for processing large images.
This module provides functionality to:
- Tile large images into smaller patches for efficient processing
- Support overlapping and non-overlapping tiling strategies
- Reconstruct original images from tiles
- Handle upscaling and downscaling with padding or interpolation
Example
>>> from anomalib.data.utils.tiler import Tiler
>>> import torch
>>> # Create tiler with 256x256 tiles and 128 stride
>>> tiler = Tiler(tile_size=256, stride=128)
>>> # Create sample 512x512 image
>>> image = torch.rand(1, 3, 512, 512)
>>> # Generate tiles
>>> tiles = tiler.tile(image)
>>> tiles.shape
torch.Size([9, 3, 256, 256])
>>> # Reconstruct image from tiles
>>> reconstructed = tiler.untile(tiles)
>>> reconstructed.shape
torch.Size([1, 3, 512, 512])
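The tile count in the example (9 tiles from a 512x512 image with 256x256 tiles and stride 128) follows the standard sliding-window count. A plain-Python sketch of that arithmetic (the helper `num_tiles` is illustrative, not part of anomalib, and assumes the image size fits the grid exactly so no resizing is needed):

```python
def num_tiles(image_size: int, tile_size: int, stride: int) -> int:
    # Number of tile positions along one dimension for a sliding window
    # that fits the image exactly.
    return (image_size - tile_size) // stride + 1

# 512x512 image, 256x256 tiles, stride 128 -> 3 positions per axis, 9 tiles total.
per_axis = num_tiles(512, 256, 128)
print(per_axis * per_axis)  # 9
```

With stride equal to tile size the same formula gives non-overlapping tiling: `num_tiles(512, 256, 256)` is 2 per axis, 4 tiles total.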
- class anomalib.data.utils.tiler.ImageUpscaleMode(value)#
Mode for upscaling images.
- PADDING#
Upscale by padding with zeros
- INTERPOLATION#
Upscale using interpolation
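For illustration, a minimal re-creation of a two-member mode enum like this one. Inheriting from `str` is an assumption about the actual class, but it is what would let the plain strings `"padding"` / `"interpolation"` used elsewhere on this page compare equal to the enum members:

```python
from enum import Enum

# Illustrative re-creation; the real definition in anomalib may differ.
class ImageUpscaleMode(str, Enum):
    PADDING = "padding"
    INTERPOLATION = "interpolation"

# A str-mixin enum accepts and compares against plain strings.
print(ImageUpscaleMode("padding") is ImageUpscaleMode.PADDING)  # True
print(ImageUpscaleMode.INTERPOLATION == "interpolation")        # True
```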
- exception anomalib.data.utils.tiler.StrideSizeError#
Bases: Exception
Error raised when stride size exceeds tile size.
- class anomalib.data.utils.tiler.Tiler(tile_size, stride=None, remove_border_count=0, mode=ImageUpscaleMode.PADDING)#
Bases: object
Tile images into overlapping or non-overlapping patches.
This class provides functionality to:
- Split large images into smaller tiles for efficient processing
- Support overlapping tiles with configurable stride
- Remove border pixels from tiles before reconstruction
- Reconstruct the original image from processed tiles
- Parameters:
tile_size (int | Sequence) – Size of tiles as int or (height, width)
stride (int | Sequence | None) – Stride between tiles as int or (height, width). If None, uses tile_size (non-overlapping)
remove_border_count (int) – Number of border pixels to remove from tiles
mode (ImageUpscaleMode) – Upscaling mode for resizing, either "padding" or "interpolation"
Examples
>>> import torch
>>> from torchvision import transforms
>>> from skimage.data import camera
>>> # Create tiler for 256x256 tiles with 128 stride
>>> tiler = Tiler(tile_size=256, stride=128)
>>> # Convert test image to tensor
>>> image = transforms.ToTensor()(camera())
>>> # Generate tiles
>>> tiles = tiler.tile(image)
>>> image.shape, tiles.shape
(torch.Size([3, 512, 512]), torch.Size([9, 3, 256, 256]))
>>> # Process tiles here...
>>> # Reconstruct image from tiles
>>> reconstructed = tiler.untile(tiles)
>>> reconstructed.shape
torch.Size([1, 3, 512, 512])
- tile(image, use_random_tiling=False)#
Tile input image into patches.
- Parameters:
image (Tensor) – Input image to tile
use_random_tiling (bool) – If True, sample tiles from random locations instead of a regular grid
- Returns:
Generated tiles
- Return type:
Tensor
- Raises:
ValueError – If tile size exceeds image size
Examples
>>> tiler = Tiler(tile_size=512, stride=256)
>>> image = torch.rand(2, 3, 1024, 1024)
>>> tiles = tiler.tile(image)
>>> tiles.shape
torch.Size([18, 3, 512, 512])
- untile(tiles)#
Reconstruct image from tiles.
For overlapping tiles, averages overlapping regions.
- Parameters:
tiles (Tensor) – Tiles generated by tile()
- Returns:
Reconstructed image
- Return type:
Tensor
Examples
>>> tiler = Tiler(tile_size=512, stride=256)
>>> image = torch.rand(2, 3, 1024, 1024)
>>> tiles = tiler.tile(image)
>>> reconstructed = tiler.untile(tiles)
>>> reconstructed.shape
torch.Size([2, 3, 1024, 1024])
>>> torch.equal(image, reconstructed)
True
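The overlap-averaging idea behind untile() can be sketched with `torch.nn.functional.unfold`/`fold`: fold sums overlapping patch contributions, so dividing by a per-pixel coverage count yields the average. This is a generic sketch of the technique, not anomalib's implementation (which also handles border removal and resizing):

```python
import torch
import torch.nn.functional as F

def average_overlaps(image: torch.Tensor, tile: int, stride: int) -> torch.Tensor:
    """Split an image into overlapping tiles, then rebuild it by averaging overlaps."""
    # Extract overlapping patches: shape (B, C*tile*tile, num_positions).
    patches = F.unfold(image, kernel_size=tile, stride=stride)
    # fold() sums overlapping contributions back into image space.
    summed = F.fold(patches, output_size=image.shape[-2:],
                    kernel_size=tile, stride=stride)
    # Count how many tiles cover each pixel, then normalize by that count.
    ones = torch.ones_like(image)
    counts = F.fold(F.unfold(ones, kernel_size=tile, stride=stride),
                    output_size=image.shape[-2:], kernel_size=tile, stride=stride)
    return summed / counts

x = torch.rand(1, 3, 512, 512)
y = average_overlaps(x, tile=256, stride=128)
print(torch.allclose(x, y, atol=1e-5))  # True: averaging identical copies recovers x
```

Since every patch here is an exact copy of the underlying pixels, averaging recovers the original; after per-tile processing (the real use case), the average blends the differing tile predictions in the overlap regions.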
- static validate_size_type(parameter)#
Validate and convert size parameter to tuple.
- Parameters:
parameter (int | Sequence) – Size as int or sequence of (height, width)
- Returns:
Validated size as (height, width)
- Return type:
tuple[int, int]
- Raises:
TypeError – If parameter type is invalid
ValueError – If parameter length is not 2
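A sketch of the validation contract described here (the helper name `as_hw_tuple` is hypothetical; the actual static method may differ internally):

```python
from collections.abc import Sequence

def as_hw_tuple(parameter):
    # int -> square size; 2-element sequence -> (height, width) tuple.
    if isinstance(parameter, int):
        return (parameter, parameter)
    if isinstance(parameter, Sequence):
        if len(parameter) != 2:
            raise ValueError("Size sequence must have exactly 2 elements (height, width)")
        return (parameter[0], parameter[1])
    raise TypeError(f"Size must be int or Sequence, got {type(parameter)}")

print(as_hw_tuple(256))        # (256, 256)
print(as_hw_tuple([256, 128])) # (256, 128)
```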
- anomalib.data.utils.tiler.compute_new_image_size(image_size, tile_size, stride)#
Compute new image size that is divisible by tile size and stride.
- Parameters:
image_size (tuple) – Original image size as (height, width)
tile_size (tuple) – Tile size as (height, width)
stride (tuple) – Stride size as (height, width)
- Returns:
New image size divisible by tile size and stride
- Return type:
tuple
Examples
>>> compute_new_image_size((512, 512), (256, 256), (128, 128))
(512, 512)
>>> compute_new_image_size((512, 512), (222, 222), (111, 111))
(555, 555)
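One formula consistent with both examples: per dimension, round the number of stride steps up to a whole number, so an integral grid of tile positions covers the image. A plain-Python sketch (the helper `new_size_1d` is illustrative, not the library function):

```python
import math

def new_size_1d(image: int, tile: int, stride: int) -> int:
    # Smallest size >= image such that (size - tile) is a multiple of stride,
    # i.e. a whole number of sliding-window positions spans the image.
    return math.ceil((image - tile) / stride) * stride + tile

print(new_size_1d(512, 256, 128))  # 512: already fits the grid exactly
print(new_size_1d(512, 222, 111))  # 555: ceil(290/111)=3 steps, 3*111+222
```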
- anomalib.data.utils.tiler.downscale_image(image, size, mode=ImageUpscaleMode.PADDING)#
Downscale image to desired size.
- Parameters:
image (Tensor) – Input image tensor
size (tuple) – Target size as (height, width)
mode (ImageUpscaleMode) – Downscaling mode, either "padding" or "interpolation"
- Returns:
Downscaled image
- Return type:
Tensor
Examples
>>> x = torch.rand(1, 3, 512, 512)
>>> y = upscale_image(x, (555, 555), "padding")
>>> z = downscale_image(y, (512, 512), "padding")
>>> torch.allclose(x, z)
True
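The round-trip in the example works because, in "padding" mode, upscaling presumably pads zeros along the bottom/right edges and downscaling crops them off again. A sketch of that assumption in plain torch (not calling anomalib; the pad placement is inferred from the lossless round-trip, not confirmed by this page):

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 512, 512)
# Pad right and bottom with zeros: F.pad order is (left, right, top, bottom).
padded = F.pad(x, (0, 555 - 512, 0, 555 - 512))
print(padded.shape)  # torch.Size([1, 3, 555, 555])
# Cropping the padding back off recovers the original exactly.
cropped = padded[..., :512, :512]
print(torch.equal(x, cropped))  # True
```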
- anomalib.data.utils.tiler.upscale_image(image, size, mode=ImageUpscaleMode.PADDING)#
Upscale image to desired size using padding or interpolation.
- Parameters:
image (Tensor) – Input image tensor
size (tuple) – Target size as (height, width)
mode (ImageUpscaleMode) – Upscaling mode, either "padding" or "interpolation"
- Returns:
Upscaled image
- Return type:
Tensor
Examples
>>> image = torch.rand(1, 3, 512, 512)
>>> upscaled = upscale_image(image, (555, 555), "padding")
>>> upscaled.shape
torch.Size([1, 3, 555, 555])
>>> upscaled = upscale_image(image, (555, 555), "interpolation")
>>> upscaled.shape
torch.Size([1, 3, 555, 555])
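In "interpolation" mode the resize is presumably a torch interpolation rather than zero-padding; a sketch under that assumption (the exact settings, e.g. bilinear vs. nearest, are not confirmed by this page):

```python
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 512, 512)
# Resample to the target size; bilinear is an assumed choice of mode.
upscaled = F.interpolate(image, size=(555, 555), mode="bilinear", align_corners=False)
print(upscaled.shape)  # torch.Size([1, 3, 555, 555])
```

Unlike padding, this is lossy: interpolating back down to (512, 512) generally does not reproduce the original pixels exactly.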