Enhancing zero-shot relation extraction with a dual contrastive learning framework and a cross-attention module
Abstract: Zero-shot relation extraction (ZSRE) is essential for improving the understanding of natural language relations and enhancing the accuracy and efficiency of natural language processing methods in practical applications. However, the existing ZSRE models ignore the importance of semantic information fusion and possess limitations when used for zero-shot relation extraction tasks. Thus, this paper proposes a dual contrastive learning framework and a cross-attention network module for ZSRE. First, our model designs a dual contrastive learning framework to compare the input sentences and relation descriptions from different perspectives; this process aims to achieve better separation between different relation categories in the representation space. Moreover, the cross-attention network of our model is introduced from the computer vision field to enhance the attention paid by the input instance to the relevant information of the relation description. The experimental results obtained on the Wiki-ZSL and FewRel datasets fully demonstrate the effectiveness of our approach.
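The abstract names two concrete components: a cross-attention step in which the input sentence attends to its candidate relation description, and a contrastive objective applied from two directions. Only the abstract is available in this record, so the sketch below is one plausible PyTorch-style reading of those two ideas; the class and function names, the mean-pooling, and the InfoNCE-style loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): sentence tokens attend over
# relation-description tokens via cross-attention, and a symmetric
# (two-direction) InfoNCE-style contrastive loss pulls matched
# sentence/relation embeddings together. All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SentenceRelationCrossAttention(nn.Module):
    """Sentence tokens (queries) attend over relation-description tokens (keys/values)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, sent_tokens: torch.Tensor, rel_tokens: torch.Tensor) -> torch.Tensor:
        # sent_tokens: (batch, sent_len, dim); rel_tokens: (batch, rel_len, dim)
        attended, _ = self.attn(query=sent_tokens, key=rel_tokens, value=rel_tokens)
        return self.norm(sent_tokens + attended)  # residual fusion of the attended description


def dual_contrastive_loss(sent_emb: torch.Tensor,
                          rel_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE in both directions: sentence -> relation and relation -> sentence.

    sent_emb, rel_emb: (batch, dim) pooled embeddings where row i of each tensor
    describes the same relation (positive pair); all other rows act as negatives.
    """
    sent_emb = F.normalize(sent_emb, dim=-1)
    rel_emb = F.normalize(rel_emb, dim=-1)
    logits = sent_emb @ rel_emb.t() / temperature           # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_s2r = F.cross_entropy(logits, targets)             # sentence -> relation view
    loss_r2s = F.cross_entropy(logits.t(), targets)         # relation -> sentence view
    return 0.5 * (loss_s2r + loss_r2s)


if __name__ == "__main__":
    batch, sent_len, rel_len, dim = 4, 32, 16, 256
    sent_tokens = torch.randn(batch, sent_len, dim)          # stand-in for encoder outputs
    rel_tokens = torch.randn(batch, rel_len, dim)            # stand-in for description encodings

    fused = SentenceRelationCrossAttention(dim)(sent_tokens, rel_tokens)
    loss = dual_contrastive_loss(fused.mean(dim=1), rel_tokens.mean(dim=1))
    print(fused.shape, loss.item())
```

In this reading, "dual" is taken to mean that the same similarity matrix is supervised from both the sentence-to-relation and relation-to-sentence views, the standard symmetric InfoNCE arrangement; the paper may define its two perspectives differently.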
Main Authors: Diyou Li, Lijuan Zhang, Jie Huang, Neal Xiong, Lei Zhang, Jian Wan
Format: Article
Language: English
Published: Springer, 2024-11-01
Series: Complex & Intelligent Systems
Subjects: Zero-shot relation extraction; Cross-attention network; Dual contrastive learning
Online Access: https://doi.org/10.1007/s40747-024-01642-6
_version_ | 1832571162295533568 |
---|---|
author | Diyou Li, Lijuan Zhang, Jie Huang, Neal Xiong, Lei Zhang, Jian Wan
author_facet | Diyou Li, Lijuan Zhang, Jie Huang, Neal Xiong, Lei Zhang, Jian Wan
author_sort | Diyou Li |
collection | DOAJ |
description | Abstract Zero-shot relation extraction (ZSRE) is essential for improving the understanding of natural language relations and enhancing the accuracy and efficiency of natural language processing methods in practical applications. However, the existing ZSRE models ignore the importance of semantic information fusion and possess limitations when used for zero-shot relation extraction tasks. Thus, this paper proposes a dual contrastive learning framework and a cross-attention network module for ZSRE. First, our model designs a dual contrastive learning framework to compare the input sentences and relation descriptions from different perspectives; this process aims to achieve better separation between different relation categories in the representation space. Moreover, the cross-attention network of our model is introduced from the computer vision field to enhance the attention paid by the input instance to the relevant information of the relation description. The experimental results obtained on the Wiki-ZSL and FewRel datasets fully demonstrate the effectiveness of our approach. |
format | Article |
id | doaj-art-4a79ddabbf4d42db97b8a0879d67011a |
institution | Kabale University |
issn | 2199-4536 2198-6053 |
language | English |
publishDate | 2024-11-01 |
publisher | Springer |
record_format | Article |
series | Complex & Intelligent Systems |
spelling | doaj-art-4a79ddabbf4d42db97b8a0879d67011a (2025-02-02T12:50:01Z). Springer, Complex & Intelligent Systems, ISSN 2199-4536 / 2198-6053, 2024-11-01, https://doi.org/10.1007/s40747-024-01642-6. Enhancing zero-shot relation extraction with a dual contrastive learning framework and a cross-attention module. Diyou Li, Lijuan Zhang, Jie Huang (School of Information and Electronic Engineering, Zhejiang University of Science and Technology); Neal Xiong (Department of Computer Science and Mathematics, Sul Ross State University); Lei Zhang, Jian Wan (School of Information and Electronic Engineering, Zhejiang University of Science and Technology). Topics: Zero-shot relation extraction; Cross-attention network; Dual contrastive learning.
title | Enhancing zero-shot relation extraction with a dual contrastive learning framework and a cross-attention module |
topic | Zero-shot relation extraction; Cross-attention network; Dual contrastive learning
url | https://doi.org/10.1007/s40747-024-01642-6 |