Image Super-Resolution Reconstruction Based on the Lightweight Hybrid Attention Network

Bibliographic Details
Main Authors: Chu Yuezhong, Wang Kang, Zhang Xuefeng, Liu Heng
Format: Article
Language: English
Published: Wiley, 2024-01-01
Series: Advances in Multimedia
Online Access: http://dx.doi.org/10.1155/2024/2293286
author Chu Yuezhong
Wang Kang
Zhang Xuefeng
Liu Heng
collection DOAJ
description To address the large parameter counts and high computational complexity of current image super-resolution models, this paper proposes a lightweight hybrid attention network (LHAN). LHAN consists of three parts: shallow feature extraction, lightweight hybrid attention blocks (LHAB), and an upsampling module. LHAB combines multiscale self-attention with large-kernel attention. To keep the network lightweight, the multiscale self-attention block (MSSAB) improves the self-attention mechanism by computing attention in groups over windows of different sizes. In the large-kernel attention branch, depthwise separable convolutions are used to reduce parameters, and a normal convolution together with a dilated convolution replaces the large-kernel convolution while keeping the receptive field unchanged. Experimental results for 4x super-resolution on five benchmark datasets, including Set5 and Set14, show that the proposed method performs well in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). On the Urban100 benchmark, the PSNR of the proposed method improves on SwinIR by 0.10 dB, while the parameter count and computational cost (floating-point operations, FLOPs) are reduced by 315K and 16.4 G, respectively. The proposed LHAN thus reduces both parameters and computation while achieving excellent reconstruction quality.
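The decomposition described in the abstract, replacing a single large-kernel convolution with a stacked normal convolution and a dilated convolution built from depthwise layers plus a pointwise mixing layer, can be sketched as follows. This is a minimal illustrative PyTorch example under stated assumptions: the module name LargeKernelAttention, the 5x5 and 7x7 kernel sizes, and the dilation of 3 are chosen only to show the idea and are not taken from the paper.

# Illustrative sketch (not the authors' code): a depthwise convolution plus a
# depthwise dilated convolution stand in for one large-kernel convolution with
# the same receptive field, followed by a 1x1 pointwise convolution, and the
# result is used as an attention map over the input features.
import torch
import torch.nn as nn


class LargeKernelAttention(nn.Module):
    """Assumed example: 5x5 depthwise + 7x7 depthwise dilated (dilation=3)
    gives a 23x23 receptive field (5 + (7 - 1) * 3), matching a single
    large kernel at a fraction of the parameters."""

    def __init__(self, channels: int):
        super().__init__()
        # 5x5 depthwise convolution (groups=channels makes it depthwise).
        self.dw_conv = nn.Conv2d(channels, channels, kernel_size=5,
                                 padding=2, groups=channels)
        # 7x7 depthwise dilated convolution; padding = dilation * (7 - 1) / 2
        # keeps the spatial size unchanged.
        self.dw_dilated = nn.Conv2d(channels, channels, kernel_size=7,
                                    padding=9, dilation=3, groups=channels)
        # 1x1 pointwise convolution mixes channels (depthwise separable).
        self.pw_conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw_conv(self.dw_dilated(self.dw_conv(x)))
        return x * attn  # re-weight the input features with the attention map


if __name__ == "__main__":
    block = LargeKernelAttention(channels=64)
    y = block(torch.randn(1, 64, 48, 48))
    print(y.shape)  # torch.Size([1, 64, 48, 48])

The trade-off this sketch illustrates is the one the abstract highlights: two stacked small convolutions reach the receptive field of a much larger single kernel while using far fewer parameters and FLOPs.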
format Article
id doaj-art-0396c3feb1dc49d59d6a79f8524e69ee
institution Kabale University
issn 1687-5699
language English
publishDate 2024-01-01
publisher Wiley
record_format Article
series Advances in Multimedia
spelling doaj-art-0396c3feb1dc49d59d6a79f8524e69ee, 2025-02-03T11:38:00Z, eng, Wiley, Advances in Multimedia, ISSN 1687-5699, 2024-01-01, vol. 2024, doi:10.1155/2024/2293286. Chu Yuezhong, Wang Kang, Zhang Xuefeng, Liu Heng (School of Computer Science and Technology). http://dx.doi.org/10.1155/2024/2293286
title Image Super-Resolution Reconstruction Based on the Lightweight Hybrid Attention Network
url http://dx.doi.org/10.1155/2024/2293286