EMSFomer: Efficient Multi-Scale Transformer for Real-Time Semantic Segmentation
Transformer-based models have achieved impressive performance in semantic segmentation in recent years. However, the multi-head self-attention mechanism in Transformers incurs significant computational overhead and becomes impractical for real-time applications due to its high complexity and large l...
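For background, the cost referred to in the abstract is the standard quadratic scaling of self-attention with the number of tokens; this is a general property of Transformers, not a detail taken from this article. A minimal statement of that cost:

```latex
% Standard scaled dot-product self-attention (general background, not specific
% to EMSFomer): for N tokens with per-head width d_k, forming Q K^T costs
% O(N^2 d_k) time and O(N^2) memory per head, which is what makes naive
% multi-head self-attention expensive on high-resolution segmentation inputs.
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```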
| Main Authors: | Zhengyu Xia, Joohee Kim |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10852306/ |
Similar Items
- PZS‐Net: Incorporating of Frame Sequence and Multi‐Scale Priors for Prostate Zonal Segmentation in Transrectal Ultrasound
  by: Jianguo Ju, et al.
  Published: (2025-01-01)
- Hybrid Offset Position Encoding for Large-Scale Point Cloud Semantic Segmentation
  by: Yu Xiao, et al.
  Published: (2025-01-01)
- Mix-layers semantic extraction and multi-scale aggregation transformer for semantic segmentation
  by: Tianping Li, et al.
  Published: (2024-11-01)
- MFCEN: A lightweight multi-scale feature cooperative enhancement network for single-image super-resolution
  by: Jiange Liu, et al.
  Published: (2024-10-01)
- Construction of Multi-Scale Fusion Attention Unified Perceptual Parsing Networks for Semantic Segmentation of Mangrove Remote Sensing Images
  by: Xin Wang, et al.
  Published: (2025-01-01)