brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder module
- class brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder.Attention(dim: int, num_heads: int = 8, qkv_bias: bool = True, use_rel_pos: bool = False, rel_pos_zero_init: bool = True, input_size: Tuple[int, int] | None = None)
Bases: Module
Multi-head Attention block with relative position embeddings.
- forward(x: Tensor) → Tensor
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
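A minimal usage sketch (not part of the original docstring), assuming the import path shown above and illustrative shape values (a 14×14 token grid with 768-dimensional tokens); the block operates on a channels-last (B, H, W, C) grid:

    import torch

    from brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder import Attention

    # Multi-head attention over a 14x14 token grid with 768-dim tokens;
    # relative position embeddings are disabled by default (use_rel_pos=False).
    attn = Attention(dim=768, num_heads=12)
    tokens = torch.randn(1, 14, 14, 768)   # (B, H, W, C)
    out = attn(tokens)                     # same shape as the input: (1, 14, 14, 768)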
- class brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder.Block(dim: int, num_heads: int, mlp_ratio: float = 4.0, qkv_bias: bool = True, norm_layer: ~typing.Type[~torch.nn.modules.module.Module] = <class 'torch.nn.modules.normalization.LayerNorm'>, act_layer: ~typing.Type[~torch.nn.modules.module.Module] = <class 'torch.nn.modules.activation.GELU'>, use_rel_pos: bool = False, rel_pos_zero_init: bool = True, window_size: int = 0, input_size: ~typing.Tuple[int, int] | None = None)
Bases: Module
Transformer block with support for window attention and residual propagation blocks.
- forward(x: Tensor) → Tensor
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
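A minimal sketch of running a single block (illustrative values, not from the source docs): with window_size > 0 the block windows the token grid internally before attention and reverses the partition afterwards, so input and output shapes match.

    import torch

    from brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder import Block

    # One ViT-B-sized transformer block using 14x14 windowed attention
    # over a 64x64 token grid (padding/unpadding is handled internally).
    block = Block(dim=768, num_heads=12, window_size=14, input_size=(64, 64))
    x = torch.randn(1, 64, 64, 768)        # (B, H, W, C)
    y = block(x)                           # residual output, same shape as the input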
- class brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder.ImageEncoderViT(img_size: int = 1024, patch_size: int = 16, in_chans: int = 3, embed_dim: int = 768, depth: int = 12, num_heads: int = 12, mlp_ratio: float = 4.0, out_chans: int = 256, qkv_bias: bool = True, norm_layer: ~typing.Type[~torch.nn.modules.module.Module] = <class 'torch.nn.modules.normalization.LayerNorm'>, act_layer: ~typing.Type[~torch.nn.modules.module.Module] = <class 'torch.nn.modules.activation.GELU'>, use_abs_pos: bool = True, use_rel_pos: bool = False, rel_pos_zero_init: bool = True, window_size: int = 0, global_attn_indexes: ~typing.Tuple[int, ...] = ())
Bases: Module
- forward(x: Tensor) → Tensor
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
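A minimal end-to-end sketch, assuming the default (ViT-B style) hyperparameters shown in the signature: the encoder maps a 1024×1024 image to a 64×64 grid of 256-channel features.

    import torch

    from brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder import ImageEncoderViT

    # Defaults: img_size=1024, patch_size=16, embed_dim=768, depth=12, out_chans=256.
    encoder = ImageEncoderViT()
    image = torch.randn(1, 3, 1024, 1024)  # (B, C, H, W) with H = W = img_size
    features = encoder(image)              # (1, 256, 64, 64): out_chans x (img_size // patch_size)^2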
- class brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder.PatchEmbed(kernel_size: Tuple[int, int] = (16, 16), stride: Tuple[int, int] = (16, 16), padding: Tuple[int, int] = (0, 0), in_chans: int = 3, embed_dim: int = 768)
Bases: Module
Image to Patch Embedding.
- forward(x: Tensor) → Tensor
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
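A minimal sketch with the default 16×16 convolutional patchify (input size chosen for illustration); the output is a channels-last token grid:

    import torch

    from brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder import PatchEmbed

    # 16x16 patches projected to 768 dimensions; output is channels-last.
    patch_embed = PatchEmbed()
    image = torch.randn(1, 3, 1024, 1024)  # (B, C, H, W)
    tokens = patch_embed(image)            # (1, 64, 64, 768) = (B, H/16, W/16, embed_dim)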
- brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder.add_decomposed_rel_pos(attn: Tensor, q: Tensor, rel_pos_h: Tensor, rel_pos_w: Tensor, q_size: Tuple[int, int], k_size: Tuple[int, int]) → Tensor
Calculate decomposed relative positional embeddings from the MViTv2 paper (https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py).
- Args:
attn (Tensor): attention map.
q (Tensor): query q in the attention layer with shape (B, q_h * q_w, C).
rel_pos_h (Tensor): relative position embeddings (Lh, C) for height axis.
rel_pos_w (Tensor): relative position embeddings (Lw, C) for width axis.
q_size (Tuple): spatial sequence size of query q with (q_h, q_w).
k_size (Tuple): spatial sequence size of key k with (k_h, k_w).
- Returns:
attn (Tensor): attention map with added relative positional embeddings.
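A minimal shape sketch (illustrative values only), assuming a square 14×14 query/key grid and 64 per-head channels:

    import torch

    from brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder import add_decomposed_rel_pos

    B, q_h, q_w, C = 1, 14, 14, 64                  # batch, query grid, per-head channels
    attn = torch.zeros(B, q_h * q_w, q_h * q_w)     # attention map before the positional bias
    q = torch.randn(B, q_h * q_w, C)
    rel_pos_h = torch.randn(2 * q_h - 1, C)         # (Lh, C) table for the height axis
    rel_pos_w = torch.randn(2 * q_w - 1, C)         # (Lw, C) table for the width axis
    attn = add_decomposed_rel_pos(attn, q, rel_pos_h, rel_pos_w, (q_h, q_w), (q_h, q_w))
    # attn.shape == (1, 196, 196): same map with the decomposed positional bias added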
- brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder.get_rel_pos(q_size: int, k_size: int, rel_pos: Tensor) → Tensor
Get relative positional embeddings according to the relative positions of query and key sizes.
- Args:
q_size (int): size of query q.
k_size (int): size of key k.
rel_pos (Tensor): relative position embeddings (L, C).
- Returns:
Extracted positional embeddings according to relative positions.
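A minimal shape sketch (illustrative values), assuming equal query and key sizes so the embedding table is used as-is:

    import torch

    from brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder import get_rel_pos

    # Table covering 2*14 - 1 = 27 relative offsets, 64 channels each.
    rel_pos = torch.randn(27, 64)
    rel = get_rel_pos(q_size=14, k_size=14, rel_pos=rel_pos)
    # rel.shape == (14, 14, 64): one embedding per (query index, key index) pair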
- brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder.window_partition(x: Tensor, window_size: int) → Tuple[Tensor, Tuple[int, int]]
Partition into non-overlapping windows with padding if needed.
- Args:
x (tensor): input tokens with [B, H, W, C].
window_size (int): window size.
- Returns:
windows: windows after partition with [B * num_windows, window_size, window_size, C].
(Hp, Wp): padded height and width before partition.
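A minimal sketch (illustrative values): a 64×64 token grid split into 14×14 windows is padded to 70×70, giving 25 windows per image.

    import torch

    from brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder import window_partition

    x = torch.randn(1, 64, 64, 768)        # (B, H, W, C)
    windows, (Hp, Wp) = window_partition(x, window_size=14)
    # windows.shape == (25, 14, 14, 768); (Hp, Wp) == (70, 70)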
- brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder.window_unpartition(windows: Tensor, window_size: int, pad_hw: Tuple[int, int], hw: Tuple[int, int]) → Tensor
Window unpartition into original sequences, removing padding.
- Args:
windows (tensor): input tokens with [B * num_windows, window_size, window_size, C].
window_size (int): window size.
pad_hw (Tuple): padded height and width (Hp, Wp).
hw (Tuple): original height and width (H, W) before padding.
- Returns:
x: unpartitioned sequences with [B, H, W, C].
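A minimal round-trip sketch (illustrative values), pairing window_partition and window_unpartition: passing back the recorded padded and original sizes strips the padding and recovers the input grid exactly.

    import torch

    from brails.processors.vlm_segmenter.segment_anything.modeling.image_encoder import (
        window_partition,
        window_unpartition,
    )

    x = torch.randn(1, 64, 64, 768)        # (B, H, W, C)
    windows, pad_hw = window_partition(x, window_size=14)
    x_restored = window_unpartition(windows, window_size=14, pad_hw=pad_hw, hw=(64, 64))
    assert x_restored.shape == x.shape     # (1, 64, 64, 768)
    assert torch.equal(x_restored, x)      # padding is removed, so the round trip is exact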