IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding
Image prefiltering with just noticeable distortion (JND) improves coding efficiency in a visually lossless way by removing perceptually redundant information prior to compression. However, the true JND cannot be modeled well by the inaccurate masking equations of traditional approaches or by the image-level subjective tests of deep learning approaches. Thus, this paper proposes a fine-grained JND prefiltering dataset guided by image quality assessment for accurate block-level JND modeling. The dataset is constructed from decoded images to include coding effects and is further perceptually enhanced with block overlap and edge preservation. Based on this dataset, we propose a lightweight JND prefiltering network, IQNet, which applies directly to different quantization cases with a single model and requires only 3K parameters. Experimental results show that the proposed approach yields maximum/average bitrate savings of 41%/15% and 53%/19% over Versatile Video Coding for the all-intra and low-delay P configurations, respectively, with negligible subjective quality loss. Our method achieves higher perceptual quality with a model an order of magnitude smaller than previous deep learning methods.
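The core idea the abstract describes can be sketched in a few lines. This is not the paper's IQNet (a learned, 3K-parameter network trained on an IQA-guided dataset); it is a toy illustration of JND prefiltering, assuming a simple box blur as the filter and a hypothetical uniform per-pixel JND map: the input is smoothed to remove redundant detail, but each pixel's change is clamped to its JND bound so the result stays visually indistinguishable from the original.

```python
# Toy sketch of JND-guided prefiltering (illustrative only, not the paper's IQNet).
# Smooth the input, then clip each pixel's change to a just-noticeable-difference
# (JND) threshold so the filtered image remains visually lossless.

def box_blur_1d(row, radius=1):
    """Simple box filter over a 1-D pixel row (window clamps at the edges)."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def jnd_prefilter(row, jnd):
    """Blur the row, then clamp each pixel's delta to [-jnd, +jnd]."""
    blurred = box_blur_1d(row)
    return [
        orig + max(-j, min(j, b - orig))  # keep the change below the JND bound
        for orig, b, j in zip(row, blurred, jnd)
    ]

row = [100, 110, 100, 110, 100]  # noisy, perceptually flat region
jnd = [3, 3, 3, 3, 3]            # hypothetical uniform JND map
print(jnd_prefilter(row, jnd))   # → [103, 107, 103, 107, 103]
```

The clamping step is what makes the filtering "visually lossless": the blur flattens the ±10 oscillation (which costs bits to encode), but no pixel moves more than the 3-level JND bound, so the change is below the noticeable threshold. IQNet replaces both the fixed blur and the uniform JND map with values predicted per block by a small network.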
| Main Authors: | Yu-Han Sun, Chiang Lo-Hsuan Lee, Tian-Sheuan Chang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Open Journal of Circuits and Systems |
| Subjects: | just noticeable distortion; video quality assessment; video coding |
| Online Access: | https://ieeexplore.ieee.org/document/10365509/ |
_version_ | 1832592859624112128 |
---|---|
author | Yu-Han Sun; Chiang Lo-Hsuan Lee; Tian-Sheuan Chang |
author_facet | Yu-Han Sun; Chiang Lo-Hsuan Lee; Tian-Sheuan Chang |
author_sort | Yu-Han Sun |
collection | DOAJ |
description | Image prefiltering with just noticeable distortion (JND) improves coding efficiency in a visually lossless way by removing perceptually redundant information prior to compression. However, the true JND cannot be modeled well by the inaccurate masking equations of traditional approaches or by the image-level subjective tests of deep learning approaches. Thus, this paper proposes a fine-grained JND prefiltering dataset guided by image quality assessment for accurate block-level JND modeling. The dataset is constructed from decoded images to include coding effects and is further perceptually enhanced with block overlap and edge preservation. Based on this dataset, we propose a lightweight JND prefiltering network, IQNet, which applies directly to different quantization cases with a single model and requires only 3K parameters. Experimental results show that the proposed approach yields maximum/average bitrate savings of 41%/15% and 53%/19% over Versatile Video Coding for the all-intra and low-delay P configurations, respectively, with negligible subjective quality loss. Our method achieves higher perceptual quality with a model an order of magnitude smaller than previous deep learning methods. |
format | Article |
id | doaj-art-583f8aec48444b3aa698d8be2457005b |
institution | Kabale University |
issn | 2644-1225 |
language | English |
publishDate | 2024-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Open Journal of Circuits and Systems |
spelling | doaj-art-583f8aec48444b3aa698d8be2457005b (indexed 2025-01-21T00:02:49Z); eng; IEEE; IEEE Open Journal of Circuits and Systems; ISSN 2644-1225; 2024-01-01; Vol. 5, pp. 17-27; DOI 10.1109/OJCAS.2023.3344094; IEEE article 10365509; IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding; Yu-Han Sun, Chiang Lo-Hsuan Lee, Tian-Sheuan Chang (https://orcid.org/0000-0002-0561-8745), all with the Institute of Electronics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan; abstract as in the description field; https://ieeexplore.ieee.org/document/10365509/; keywords: just noticeable distortion, video quality assessment, video coding |
spellingShingle | Yu-Han Sun; Chiang Lo-Hsuan Lee; Tian-Sheuan Chang; IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding; IEEE Open Journal of Circuits and Systems; just noticeable distortion; video quality assessment; video coding |
title | IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding |
title_full | IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding |
title_fullStr | IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding |
title_full_unstemmed | IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding |
title_short | IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding |
title_sort | iqnet image quality assessment guided just noticeable difference prefiltering for versatile video coding |
topic | just noticeable distortion; video quality assessment; video coding |
url | https://ieeexplore.ieee.org/document/10365509/ |
work_keys_str_mv | AT yuhansun iqnetimagequalityassessmentguidedjustnoticeabledifferenceprefilteringforversatilevideocoding AT chianglohsuanlee iqnetimagequalityassessmentguidedjustnoticeabledifferenceprefilteringforversatilevideocoding AT tiansheuanchang iqnetimagequalityassessmentguidedjustnoticeabledifferenceprefilteringforversatilevideocoding |