Feature enhanced cascading attention network for lightweight image super-resolution
Abstract Attention mechanisms have been introduced to exploit deep-level information for image restoration by capturing feature dependencies. However, existing attention mechanisms often have limited perceptual capabilities and are incompatible with low-power devices due to computational resource constraints. Therefore, we propose a feature enhanced cascading attention network (FECAN) that introduces a novel feature enhanced cascading attention (FECA) mechanism, consisting of enhanced shuffle attention (ESA) and multi-scale large separable kernel attention (MLSKA). Specifically, ESA enhances high-frequency texture features in the feature maps, and MLSKA performs further extraction on the enhanced features. Rich, fine-grained high-frequency information is thereby extracted and fused from multiple perceptual layers, improving super-resolution (SR) performance. To validate FECAN’s effectiveness, we evaluate it at different model complexities by stacking different numbers of the high-frequency enhancement modules (HFEM) that contain FECA. Extensive experiments on benchmark datasets demonstrate that FECAN outperforms state-of-the-art lightweight SR networks in terms of objective evaluation metrics and subjective visual quality. Specifically, at a ×4 scale with a 121K model size, compared to the second-ranked MAN-tiny, FECAN achieves a 0.07 dB improvement in average peak signal-to-noise ratio (PSNR) while reducing network parameters by approximately 19% and FLOPs by 20%. This demonstrates a better trade-off between SR performance and model complexity.
Main Authors: | Feng Huang, Hongwei Liu, Liqiong Chen, Ying Shen, Min Yu |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2025-01-01 |
Series: | Scientific Reports |
Subjects: | Lightweight image super-resolution; Convolution neural network; Enhanced shuffle attention; Multi-scale large separable kernel attention |
Online Access: | https://doi.org/10.1038/s41598-025-85548-4 |
_version_ | 1832594802493882368 |
---|---|
author | Feng Huang; Hongwei Liu; Liqiong Chen; Ying Shen; Min Yu |
author_facet | Feng Huang; Hongwei Liu; Liqiong Chen; Ying Shen; Min Yu |
author_sort | Feng Huang |
collection | DOAJ |
description | Abstract Attention mechanisms have been introduced to exploit deep-level information for image restoration by capturing feature dependencies. However, existing attention mechanisms often have limited perceptual capabilities and are incompatible with low-power devices due to computational resource constraints. Therefore, we propose a feature enhanced cascading attention network (FECAN) that introduces a novel feature enhanced cascading attention (FECA) mechanism, consisting of enhanced shuffle attention (ESA) and multi-scale large separable kernel attention (MLSKA). Specifically, ESA enhances high-frequency texture features in the feature maps, and MLSKA performs further extraction on the enhanced features. Rich, fine-grained high-frequency information is thereby extracted and fused from multiple perceptual layers, improving super-resolution (SR) performance. To validate FECAN’s effectiveness, we evaluate it at different model complexities by stacking different numbers of the high-frequency enhancement modules (HFEM) that contain FECA. Extensive experiments on benchmark datasets demonstrate that FECAN outperforms state-of-the-art lightweight SR networks in terms of objective evaluation metrics and subjective visual quality. Specifically, at a ×4 scale with a 121K model size, compared to the second-ranked MAN-tiny, FECAN achieves a 0.07 dB improvement in average peak signal-to-noise ratio (PSNR) while reducing network parameters by approximately 19% and FLOPs by 20%. This demonstrates a better trade-off between SR performance and model complexity. |
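The description above outlines the design only at a high level (ESA-style channel attention cascaded into MLSKA-style large-kernel attention, packaged in stackable HFEM blocks, evaluated with PSNR). The sketch below is a minimal, hypothetical illustration of that cascading idea in PyTorch; the module names, kernel sizes, group counts, and the `channel_shuffle`/`psnr` helpers are assumptions made for illustration and are not taken from the authors' released implementation.

```python
# Hypothetical sketch of a cascaded-attention block inspired by the abstract.
# All layer choices below are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups (ShuffleNet-style reordering)."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w).transpose(1, 2)
    return x.reshape(b, c, h, w)


class ShuffleAttention(nn.Module):
    """Toy stand-in for the enhanced shuffle attention (ESA) branch."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.groups = groups
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = channel_shuffle(x, self.groups)
        return x * self.gate(x)  # channel-wise reweighting


class LargeSeparableKernelAttention(nn.Module):
    """Toy stand-in for multi-scale large separable kernel attention (MLSKA)."""

    def __init__(self, channels: int, kernel_sizes=(7, 11)):
        super().__init__()
        # Depth-wise 1xk then kx1 convolutions approximate large kernels cheaply.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
                nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
            )
            for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = sum(branch(x) for branch in self.branches)  # multi-scale fusion
        return x * torch.sigmoid(self.fuse(attn))          # spatial reweighting


class HFEMBlock(nn.Module):
    """Stackable block: the ESA-style output feeds the MLSKA-style branch (a cascade)."""

    def __init__(self, channels: int):
        super().__init__()
        self.esa = ShuffleAttention(channels)
        self.mlska = LargeSeparableKernelAttention(channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual skip preserves the input; the block learns a high-frequency refinement.
        return x + self.conv(self.mlska(self.esa(x)))


def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = F.mse_loss(sr, hr)
    return 10.0 * torch.log10(max_val ** 2 / mse)


if __name__ == "__main__":
    x = torch.randn(1, 32, 48, 48)                             # a small feature map
    body = nn.Sequential(*[HFEMBlock(32) for _ in range(4)])   # "stacking different numbers of HFEM"
    print(body(x).shape)                                       # torch.Size([1, 32, 48, 48])
```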
format | Article |
id | doaj-art-f7620572984c4a37b9282cb11645db85 |
institution | Kabale University |
issn | 2045-2322 |
language | English |
publishDate | 2025-01-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Scientific Reports |
spelling | doaj-art-f7620572984c4a37b9282cb11645db85; 2025-01-19T12:21:49Z; eng; Nature Portfolio; Scientific Reports; 2045-2322; 2025-01-01; 15(1): 1–18; https://doi.org/10.1038/s41598-025-85548-4; Feature enhanced cascading attention network for lightweight image super-resolution; Feng Huang, Hongwei Liu, Liqiong Chen, Ying Shen (College of Mechanical Engineering and Automation, Fuzhou University); Min Yu (Zhongyu (Fujian) Digital Technology Co., Ltd); abstract (duplicated from the description field above); Lightweight image super-resolution; Convolution neural network; Enhanced shuffle attention; Multi-scale large separable kernel attention |
spellingShingle | Feng Huang; Hongwei Liu; Liqiong Chen; Ying Shen; Min Yu; Feature enhanced cascading attention network for lightweight image super-resolution; Scientific Reports; Lightweight image super-resolution; Convolution neural network; Enhanced shuffle attention; Multi-scale large separable kernel attention |
title | Feature enhanced cascading attention network for lightweight image super-resolution |
title_full | Feature enhanced cascading attention network for lightweight image super-resolution |
title_fullStr | Feature enhanced cascading attention network for lightweight image super-resolution |
title_full_unstemmed | Feature enhanced cascading attention network for lightweight image super-resolution |
title_short | Feature enhanced cascading attention network for lightweight image super-resolution |
title_sort | feature enhanced cascading attention network for lightweight image super resolution |
topic | Lightweight image super-resolution; Convolution neural network; Enhanced shuffle attention; Multi-scale large separable kernel attention |
url | https://doi.org/10.1038/s41598-025-85548-4 |
work_keys_str_mv | AT fenghuang featureenhancedcascadingattentionnetworkforlightweightimagesuperresolution AT hongweiliu featureenhancedcascadingattentionnetworkforlightweightimagesuperresolution AT liqiongchen featureenhancedcascadingattentionnetworkforlightweightimagesuperresolution AT yingshen featureenhancedcascadingattentionnetworkforlightweightimagesuperresolution AT minyu featureenhancedcascadingattentionnetworkforlightweightimagesuperresolution |