1 | 【YOLOv11 Improvement - Attention Mechanism】LSKA (Large Separable Kernel Attention): large separable kernel attention module | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143037077 |
2 | 【YOLOv11 Improvement - Attention Mechanism】EMA (Efficient Multi-Scale Attention): efficient multi-scale attention based on cross-spatial learning | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143041421 |
3 | 【YOLOv11 Improvement - Attention Mechanism】MSDA (Multi-Scale Dilated Attention): multi-scale dilated attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143052542 |
4 | 【YOLOv11 Improvement - Attention Mechanism】iRMB: inverted residual mobile block, a plug-and-play lightweight attention module | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143052993 |
5 | 【YOLOv11 Improvement - Attention Mechanism】CoTAttention: Contextual Transformer attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143099154 |
6 | 【YOLOv11 Improvement - Attention Mechanism】GAM (Global Attention Mechanism): global attention that reduces information loss and amplifies global cross-dimension interaction features | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143099416 |
7 | 【YOLOv11 Improvement - Attention Mechanism】MLCA (Mixed Local Channel Attention): mixed local channel attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143099740 |
8 | 【YOLOv11 Improvement - Attention Mechanism】MHSA: Multi-Head Self-Attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143107350 |
9 | 【YOLOv11 Improvement - Attention Mechanism】SimAM: lightweight attention mechanism that unlocks new potential in convolutional neural networks | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143107915 |
10 | 【YOLOv11 Improvement - Attention Mechanism】NAM: normalization-based attention module, applying a weight sparsity penalty to attention to improve efficiency and performance | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143113290 |
11 | 【YOLOv11 Improvement - Attention Mechanism】SKAttention: aggregates branch information to adaptively adjust receptive-field size | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143114359 |
12 | 【YOLOv11 Improvement - Attention Mechanism】DoubleAttention: dual attention for global feature gathering and distribution | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143134797 |
13 | 【YOLOv11 Improvement - Attention Mechanism】TripletAttention: lightweight and effective triplet attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143194262 |
14 | 【YOLOv11 Improvement - Attention Mechanism】ECA (Efficient Channel Attention): efficient channel attention module that reduces parameter count | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143194441 |
15 | 【YOLOv11 Improvement - Attention Mechanism】MSCA: multi-scale convolutional attention, plug-and-play, aids small-object detection | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143194704 |
16 | 【YOLOv11 Improvement - Attention Mechanism】CoordAttention: efficient coordinate attention for mobile networks | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143228656 |
17 | 【YOLOv11 Improvement - Attention Mechanism】DAT (Deformable Attention): deformable attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143229191 |
18 | 【YOLOv11 Improvement - Attention Mechanism】ContextAggregation: context aggregation module that captures local and global context to enhance feature representations | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143231533 |
19 | 【YOLOv11 Improvement - Attention Mechanism】EffectiveSE: improved channel attention module that reduces computational complexity and information loss | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143233243 |
20 | 【YOLOv11 Improvement - Attention Mechanism】S2Attention: integrates spatial shift and split attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143234560 |
21 | 【YOLOv11 Improvement - Attention Mechanism】Polarized Self-Attention: a finer-grained dual attention modeling structure | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143234817 |
22 | 【YOLOv11 Improvement - Attention Mechanism】LSKNet (Large Selective Kernel Network): spatial selection attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143236398 |
23 | 【YOLOv11 Improvement - Attention Mechanism】CPCA (Channel Prior Convolutional Attention): channel attention that strengthens feature representation | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143237411 |
24 | 【YOLOv11 Improvement - Attention Mechanism】MCA: multidimensional collaborative attention in deep convolutional neural networks for image recognition | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143237904 |
25 | 【YOLOv11 Improvement - Attention Mechanism】PPA from HCF-Net: parallelized attention design | Small Objects | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143248254 |
26 | 【YOLOv11 Improvement - Attention Mechanism】LS-YOLO MSFE: novel multi-scale feature extraction module | Small Objects / Remote Sensing | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143248466 |
27 | 【YOLOv11 Improvement - Attention Mechanism】Sea_Attention: Squeeze-enhanced Axial Attention, combining global semantic extraction with local detail enhancement | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143249013 |
28 | 【YOLOv11 Improvement - Attention Mechanism】Non-Local: self-attention model based on non-local means denoising filtering | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143249185 |
29 | 【YOLOv11 Improvement - Attention Mechanism】RCS-OSA: reduced-channel spatial object attention, efficient and accuracy-boosting | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143256430 |
30 | 【YOLOv11 Improvement - Attention Mechanism】SGE (Spatial Group-wise Enhance): lightweight spatial group-wise enhancement module | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143257237 |
31 | 【YOLOv11 Improvement - Attention Mechanism】MLLAttention from Mamba: model based on Mamba and linear-attention Transformers | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143258679 |
32 | 【YOLOv11 Improvement - Attention Mechanism】Gather-Excite: improves the network's ability to capture long-range feature interactions | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143258799 |
33 | 【YOLOv11 Improvement - Attention Mechanism】CAA: context anchor attention module for targets with large scale variation or elongated shapes | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143259390 |
34 | 【YOLOv11 Improvement - Attention Mechanism】Focused Linear Attention: a novel focused linear attention module | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143259521 |
35 | 【YOLOv11 Improvement - Attention Mechanism】CascadedGroupAttention: cascaded group attention, improving the efficiency and effectiveness of multi-head self-attention in vision Transformers | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143261498 |
36 | 【YOLOv11 Improvement - Attention Mechanism】SENetV2: aggregated dense layers for channel-wise and global representations, combining SE modules with dense layers to enhance feature representation | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143261561 |
37 | 【YOLOv11 Improvement - Attention Mechanism】CBAM: a novel attention mechanism designed for convolutional neural networks (CNNs) | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143261800 |
38 | 【YOLOv11 Improvement - Attention Mechanism】ParNet: parallel subnetwork structure achieving a low-depth yet high-performance architecture | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143289302 |
39 | 【YOLOv11 Improvement - Attention Mechanism】GC Block (GlobalContext): global context block that efficiently captures global dependencies in feature maps | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143315500 |
40 | 【YOLO11 Improvement - Attention Mechanism】Dual-ViT (Dual Vision Transformer): decomposes self-attention modeling into learning global semantics and fine-grained features | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143321942 |
41 | 【YOLO11 Improvement - Attention Mechanism】HAT (Hybrid Attention Transformer): hybrid attention mechanism for super-resolution | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143323349 |
42 | 【YOLO11 Improvement - Attention Mechanism】ELA (Efficient Local Attention): efficient local attention for deep convolutional neural networks | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143324666 |
43 | 【YOLO11 Improvement - Attention Mechanism】STA (Super Token Attention): super token attention mechanism | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143350954 |
44 | 【YOLO11 Improvement - Attention Mechanism】Deformable-LKA Attention: deformable large kernel attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143351139 |
45 | 【YOLO11 Improvement - Attention Mechanism】BRA (Bi-Level Routing Attention): bi-level routing attention | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143351336 |
46 | 【YOLO11 Improvement - Attention Mechanism】GCT (Gaussian Context Transformer): Gaussian context transformer | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143370762 |
47 | 【YOLO11 Improvement - Attention Mechanism】MDCR from HCF-Net: multi-dilated channel refiner module, capturing spatial features across various receptive-field sizes via different dilation rates | Small Objects | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143390431 |
48 | 【YOLO11 Improvement - Attention Mechanism】DASI from HCF-Net: dimension-aware selective integration module | Small Objects | Attention Mechanism | https://blog.csdn.net/shangyanaf/article/details/143407057 |