CAREA: Cotraining Attribute and Relation Embeddings for Cross-Lingual Entity Alignment in Knowledge Graphs

Knowledge graphs (KGs) are one of the most widely used techniques for organizing knowledge and have been applied extensively in many fields of artificial intelligence, such as web search and recommendation. Entity alignment provides a practical means of integrating multilingual KGs automatically.

Full description

Bibliographic Details
Main Authors: Baiyang Chen, Xiaoliang Chen, Peng Lu, Yajun Du
Format: Article
Language:English
Published: Wiley 2020-01-01
Series:Discrete Dynamics in Nature and Society
Online Access:http://dx.doi.org/10.1155/2020/6831603
author Baiyang Chen
Xiaoliang Chen
Peng Lu
Yajun Du
author_facet Baiyang Chen
Xiaoliang Chen
Peng Lu
Yajun Du
author_sort Baiyang Chen
collection DOAJ
description Knowledge graphs (KGs) are one of the most widely used techniques for organizing knowledge and have been applied extensively in many fields of artificial intelligence, such as web search and recommendation. Entity alignment provides a practical means of integrating multilingual KGs automatically. However, most existing studies consider only entity relationships and ignore the abundant information carried by entity attributes. This paper investigates cross-lingual entity alignment and proposes an iterative cotraining approach (CAREA) that trains a pair of independent models to extract the attribute features and the relation features of multilingual KGs, respectively. In each iteration, the two models take turns predicting a new set of potentially aligned entity pairs, which are further filtered by a dynamic threshold to strengthen the supervision of both models. Experimental results on three real-world datasets demonstrate the effectiveness and superiority of the proposed method: CAREA improves performance by an absolute margin of at least 3.9% on every dataset. The code is available at https://github.com/ChenBaiyang/CAREA.
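The description above outlines an alternating cotraining loop: two models (one over attribute embeddings, one over relation embeddings) take turns proposing new aligned pairs, filtered by a dynamic threshold. A minimal sketch of that idea — using toy cosine similarity, a mutual-nearest-neighbour criterion, and hypothetical names (`propose_pairs`, `cotrain`, the threshold schedule), not the authors' actual implementation — might look like:

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def propose_pairs(emb1, emb2, threshold):
    """Mutual nearest neighbours whose similarity clears the threshold."""
    sim = cosine_sim(emb1, emb2)
    fwd = sim.argmax(axis=1)   # best KG2 match for each KG1 entity
    bwd = sim.argmax(axis=0)   # best KG1 match for each KG2 entity
    return {(i, int(j)) for i, j in enumerate(fwd)
            if bwd[j] == i and sim[i, j] >= threshold}

def cotrain(attr1, attr2, rel1, rel2, seed_pairs, iters=4, t0=0.9, decay=0.05):
    """Alternate between the attribute view and the relation view: each
    iteration, one view proposes candidate pairs, filtered by a dynamic
    threshold (here a toy linear schedule that relaxes over iterations),
    and the growing aligned set supervises both views."""
    aligned = set(seed_pairs)
    views = [(attr1, attr2), (rel1, rel2)]
    for it in range(iters):
        threshold = t0 - decay * it      # dynamic threshold
        e1, e2 = views[it % 2]           # alternate views each iteration
        aligned |= propose_pairs(e1, e2, threshold)
    return aligned
```

In the real method, each iteration would also retrain the two embedding models on the enlarged aligned set; the sketch only shows the alternation and threshold-filtering mechanics.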
format Article
id doaj-art-2d24828d67b6491a9f65416d9d97d870
institution Kabale University
issn 1026-0226
1607-887X
language English
publishDate 2020-01-01
publisher Wiley
record_format Article
series Discrete Dynamics in Nature and Society
spelling Baiyang Chen, School of Computer and Software Engineering, Xihua University, Chengdu 610039, China
Xiaoliang Chen, School of Computer and Software Engineering, Xihua University, Chengdu 610039, China
Peng Lu, Department of Computer Science and Operations Research, University of Montreal, Montreal, QC H3C 3J7, Canada
Yajun Du, School of Computer and Software Engineering, Xihua University, Chengdu 610039, China
spellingShingle Baiyang Chen
Xiaoliang Chen
Peng Lu
Yajun Du
CAREA: Cotraining Attribute and Relation Embeddings for Cross-Lingual Entity Alignment in Knowledge Graphs
Discrete Dynamics in Nature and Society
title CAREA: Cotraining Attribute and Relation Embeddings for Cross-Lingual Entity Alignment in Knowledge Graphs
title_full CAREA: Cotraining Attribute and Relation Embeddings for Cross-Lingual Entity Alignment in Knowledge Graphs
title_fullStr CAREA: Cotraining Attribute and Relation Embeddings for Cross-Lingual Entity Alignment in Knowledge Graphs
title_full_unstemmed CAREA: Cotraining Attribute and Relation Embeddings for Cross-Lingual Entity Alignment in Knowledge Graphs
title_short CAREA: Cotraining Attribute and Relation Embeddings for Cross-Lingual Entity Alignment in Knowledge Graphs
title_sort carea cotraining attribute and relation embeddings for cross lingual entity alignment in knowledge graphs
url http://dx.doi.org/10.1155/2020/6831603