EMSFormer: Efficient Multi-Scale Transformer for Real-Time Semantic Segmentation
Transformer-based models have achieved impressive performance in semantic segmentation in recent years. However, the multi-head self-attention mechanism in Transformers incurs significant computational overhead and becomes impractical for real-time applications due to its high complexity and large l...
| Main Authors: | Zhengyu Xia, Joohee Kim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10852306/ |
Similar Items
- Dual Attention Dual-Resolution Networks for Real-Time Semantic Segmentation of Street Scenes
  by: Baofeng Ye, et al. Published: (2025-01-01)
- Multi-scheme cross-level attention embedded U-shape transformer for MRI semantic segmentation
  by: Qiang Wang, et al. Published: (2025-07-01)
- DLNet: A Dual-Level Network with Self- and Cross-Attention for High-Resolution Remote Sensing Segmentation
  by: Weijun Meng, et al. Published: (2025-03-01)
- Global–Local Query-Support Cross-Attention for Few-Shot Semantic Segmentation
  by: Fengxi Xie, et al. Published: (2024-09-01)
- AFENet: An Attention-Focused Feature Enhancement Network for the Efficient Semantic Segmentation of Remote Sensing Images
  by: Jiarui Li, et al. Published: (2024-11-01)