Base Video Data#

Base Video Dataset.

class anomalib.data.base.video.AnomalibVideoDataModule(train_batch_size, eval_batch_size, num_workers, val_split_mode, val_split_ratio, test_split_mode=None, test_split_ratio=None, image_size=None, transform=None, train_transform=None, eval_transform=None, seed=None)#

Bases: AnomalibDataModule

Base class for video data modules.
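A minimal usage sketch, assuming the Avenue data module from anomalib.data as a concrete subclass of AnomalibVideoDataModule; the batch key and tensor shape noted in the comments are illustrative.

    # Sketch: driving a concrete video data module (Avenue is assumed here;
    # the dataset is expected to be available under its default root).
    from anomalib.data import Avenue

    datamodule = Avenue(
        clip_length_in_frames=2,   # frames per video clip
        frames_between_clips=1,    # stride between consecutive clips
        train_batch_size=8,
        eval_batch_size=8,
        num_workers=4,
    )
    datamodule.setup()  # builds the train/val/test datasets from the configured splits

    # One training batch of clips; video batches are typically shaped
    # (batch, frames, channels, height, width).
    batch = next(iter(datamodule.train_dataloader()))
    print(batch["image"].shape)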

class anomalib.data.base.video.AnomalibVideoDataset(task, clip_length_in_frames, frames_between_clips, transform=None, target_frame=VideoTargetFrame.LAST)#

Bases: AnomalibDataset, ABC

Base Anomalib video dataset class.

Parameters:
  • task (str) – Task type, either ‘classification’ or ‘segmentation’.

  • clip_length_in_frames (int) – Number of video frames in each clip.

  • frames_between_clips (int) – Number of frames between each consecutive video clip.

  • transform (Transform, optional) – Transforms that should be applied to the input clips. Defaults to None.

  • target_frame (VideoTargetFrame) – Specifies the target frame in the video clip, used for ground truth retrieval. Defaults to VideoTargetFrame.LAST.

property samples: DataFrame#

Get the samples dataframe.
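The interaction of clip_length_in_frames and frames_between_clips can be illustrated with a small, self-contained sketch; clip_start_indices below is a hypothetical helper for explanation only, not part of the library.

    # Hypothetical helper (not part of anomalib) showing how a video is split
    # into fixed-length clips given a clip length and a stride between clips.
    def clip_start_indices(num_frames: int, clip_length_in_frames: int, frames_between_clips: int) -> list[int]:
        """Return the start index of every full-length clip in the video."""
        last_start = num_frames - clip_length_in_frames
        return list(range(0, last_start + 1, frames_between_clips))

    # A 10-frame video with 4-frame clips and a stride of 2 yields clips
    # starting at frames 0, 2, 4 and 6.
    print(clip_start_indices(10, clip_length_in_frames=4, frames_between_clips=2))  # [0, 2, 4, 6]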

class anomalib.data.base.video.VideoTargetFrame(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)#

Bases: str, Enum

Target frame for a video clip.

In multi-frame models, this determines which frame’s ground truth information is used as the target.
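A short sketch of passing the enum when constructing a video data module; Avenue is assumed as the concrete subclass, and only the LAST member, referenced above, is shown.

    # Sketch: selecting which frame's ground truth is used for each clip.
    # With VideoTargetFrame.LAST, the annotations of the final frame in a clip
    # serve as the target for the whole clip.
    from anomalib.data import Avenue
    from anomalib.data.base.video import VideoTargetFrame

    datamodule = Avenue(
        clip_length_in_frames=2,
        frames_between_clips=1,
        target_frame=VideoTargetFrame.LAST,
    )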