fightingcv

FightingCV Codebase for Attention, Backbone, MLP, Re-parameter, Convolution


Keywords
AttentionBackbone, attention, cbam, excitation-networks, linear-layers, paper, pytorch, squeeze, visual-tasks
License
Apache-2.0
Install
pip install fightingcv==1.0.1

Documentation

External-Attention-pytorch

A PyTorch implementation of "Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks".

Overview

External Attention Usage

from ExternalAttention import ExternalAttention
import torch

x = torch.randn(50, 49, 512)            # (batch, n_tokens, d_model)
ea = ExternalAttention(d_model=512, S=8)
output = ea(x)
print(output.shape)                     # torch.Size([50, 49, 512])
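To make the usage above concrete, here is a minimal sketch of the external-attention idea from the paper: tokens attend to a small learned external memory of S units via two linear layers, with a softmax followed by a second normalization. The class name `ExternalAttentionSketch` is illustrative, not the library's actual implementation.

```python
import torch
import torch.nn as nn


class ExternalAttentionSketch(nn.Module):
    """Hypothetical re-implementation of external attention for illustration."""

    def __init__(self, d_model: int, S: int = 64):
        super().__init__()
        self.mk = nn.Linear(d_model, S, bias=False)  # project tokens onto memory keys
        self.mv = nn.Linear(S, d_model, bias=False)  # read out from memory values
        self.softmax = nn.Softmax(dim=1)             # normalize over the token axis

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_tokens, d_model)
        attn = self.mk(x)                            # (batch, n_tokens, S)
        attn = self.softmax(attn)                    # softmax over tokens
        # double normalization: rescale so each token's weights over S sum to 1
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)
        return self.mv(attn)                         # (batch, n_tokens, d_model)


x = torch.randn(50, 49, 512)
out = ExternalAttentionSketch(d_model=512, S=8)(x)
print(out.shape)  # torch.Size([50, 49, 512])
```

Because the memory size S is fixed (and typically small), the cost is linear in the number of tokens, unlike the quadratic cost of self-attention.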

Self Attention Usage

from SelfAttention import ScaledDotProductAttention
import torch

x = torch.randn(50, 49, 512)            # (batch, n_tokens, d_model)
sa = ScaledDotProductAttention(d_model=512, d_k=512, d_v=512, h=8)
output = sa(x, x, x)                    # queries, keys, values (self-attention)
print(output.shape)                     # torch.Size([50, 49, 512])
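For reference, the scaled dot-product attention being wrapped above computes softmax(QKᵀ/√d_k)V. A minimal single-head sketch (a standard formulation, not necessarily the library's exact implementation):

```python
import math
import torch
import torch.nn.functional as F


def scaled_dot_product_attention(q: torch.Tensor,
                                 k: torch.Tensor,
                                 v: torch.Tensor) -> torch.Tensor:
    # q, k: (batch, n_tokens, d_k); v: (batch, n_tokens, d_v)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (batch, n, n)
    weights = F.softmax(scores, dim=-1)                       # attend over keys
    return weights @ v                                        # (batch, n, d_v)


x = torch.randn(50, 49, 512)
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # torch.Size([50, 49, 512])
```

The √d_k scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.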