Chaotic moving video quality enhancement based on deep in-loop filtering


Bibliographic Details
Main Authors: Tong Tang, Yi Yang, Dapeng Wu, Ruyan Wang, Zhidu Li
Format: Article
Language: English
Published: KeAi Communications Co., Ltd. 2024-12-01
Series: Digital Communications and Networks
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2352864823001402
Description
Summary: The Joint Video Experts Team (JVET) has announced the latest generation of the Versatile Video Coding (VVC, H.266) standard. The in-loop filter in VVC inherits the De-Blocking Filter (DBF) and Sample Adaptive Offset (SAO) of High Efficiency Video Coding (HEVC, H.265), and adds the Adaptive Loop Filter (ALF) to minimize the error between the original and decoded samples. However, for chaotic moving video encoded at low bitrates, serious blocking artifacts remain after in-loop filtering because quantization severely distorts texture details. To tackle this problem, this paper proposes a Convolutional Neural Network (CNN) based VVC in-loop filter for chaotic moving video encoded at low bitrates. First, a blur-aware attention network is designed to perceive the blurring effect and restore texture details. Then, a deep in-loop filtering method based on the blur-aware network is proposed to replace the VVC in-loop filter. Finally, experimental results show that the proposed method saves an average of 8.3% in bit consumption at similar subjective quality. Meanwhile, at a comparable bitrate, the proposed method reconstructs more texture information, significantly reducing blocking artifacts and improving visual quality.
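The core idea in the abstract is an attention map that concentrates restoration effort on regions where quantization has blurred away texture. The sketch below is a toy, hand-crafted stand-in for that idea, not the paper's CNN: the learned blur-aware attention network is replaced by a Laplacian-based texture cue, and the deep filter is replaced by an attention-gated unsharp-mask residual. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def laplacian(block):
    """3x3 Laplacian response, a crude second-derivative texture cue."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = block.shape
    padded = np.pad(block.astype(np.float64), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

def blur_aware_filter(decoded, strength=0.5):
    """Toy stand-in for a blur-aware in-loop filter (illustrative only).

    Regions with weak Laplacian response (texture flattened by quantization)
    get attention close to 1 and receive a stronger sharpening residual;
    regions that still have strong detail are left mostly untouched.
    """
    lap = laplacian(decoded)
    # Attention map in (0, 1]: highest where local texture is weakest.
    attn = 1.0 / (1.0 + np.abs(lap))
    # Unsharp-mask style correction, gated by the attention map.
    return decoded + strength * attn * (-lap)
```

In the real method this gating would be learned end-to-end inside a CNN and applied inside the coding loop (so filtered frames also serve as references); the sketch only shows the gating structure on a single luma block.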
ISSN: 2352-8648