Study on video action recognition based on an augmented negative example multi-granularity discrimination model


Bibliographic Details
Main Authors: LIU Liangzhen, YANG Yang, XIA Yingjie, KUANG Li
Format: Article
Language: Chinese (zho)
Published: Editorial Department of Journal on Communications 2024-12-01
Series: Tongxin xuebao (Journal on Communications)
Online Access: http://www.joconline.com.cn/zh/article/doi/10.11959/j.issn.1000-436x.2024268/
Description
Summary: An augmented negative example discrimination paradigm based on contrastive learning was proposed to improve the model's fine-grained discrimination of video actions. The most challenging video-text negative pairs were generated, forming an augmented negative example set for each video sample. Based on this paradigm, a multi-granularity discrimination model for video action recognition was proposed to further distinguish between positive and negative examples. In this model, video features were extracted by a video representation module guided by textual positive examples, while self-correlation relationships between positive and negative semantics were established by a semantic discriminator equipped with a self-attention mechanism. Meanwhile, a coarse-grained distinction between the video modality and the augmented negative example set was achieved, and a fine-grained distinction between positive examples and the augmented negative example set within the text modality was also accomplished. Experimental results demonstrate that the augmented negative example set improves the model's recognition of fine-grained class labels, and that the multi-granularity discrimination model outperforms current state-of-the-art methods on the Kinetics-400, HMDB51, and UCF101 datasets.
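To make the coarse-grained discrimination objective concrete, the sketch below shows one plausible form of a video-text contrastive loss over a per-sample augmented negative set, written in PyTorch. It is an illustration only: the function and tensor names are hypothetical, and the authors' actual loss formulation, negative-generation strategy, and temperature setting are not specified in this record.

    # Hypothetical sketch, not the authors' code: a CLIP-style contrastive loss
    # where each video is contrasted with its positive label text and K hard
    # negative label texts (the "augmented negative example set").
    import torch
    import torch.nn.functional as F

    def augmented_negative_loss(video_emb, pos_text_emb, neg_text_emb,
                                temperature=0.07):
        # video_emb:    (B, D)    video features from the representation module
        # pos_text_emb: (B, D)    embedding of each video's ground-truth label text
        # neg_text_emb: (B, K, D) embeddings of K hard negative label texts per video
        v = F.normalize(video_emb, dim=-1)
        p = F.normalize(pos_text_emb, dim=-1)
        n = F.normalize(neg_text_emb, dim=-1)

        pos_sim = (v * p).sum(-1, keepdim=True)        # (B, 1) video-positive similarity
        neg_sim = torch.einsum('bd,bkd->bk', v, n)     # (B, K) video-negative similarities

        # Softmax over [positive, K negatives]; the correct class is index 0.
        logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
        targets = torch.zeros(len(v), dtype=torch.long, device=v.device)
        return F.cross_entropy(logits, targets)

Under the same assumptions, the fine-grained text-modality term described in the abstract would take an analogous form, contrasting the positive label embedding against the same augmented negative set within the text space rather than against the video features.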
ISSN: 1000-436X