Ensemble graph auto-encoders for clustering and link prediction
Main Authors:
Format: Article
Language: English
Published: PeerJ Inc., 2025-01-01
Series: PeerJ Computer Science
Subjects:
Online Access: https://peerj.com/articles/cs-2648.pdf
Summary: Graph auto-encoders are a crucial research area within graph neural networks, commonly employed to generate graph embeddings with minimal error in unsupervised learning. Traditional graph auto-encoders encode neighborhood information for each node by reconstructing the graph with minimal loss, yielding node embedding representations. However, existing graph auto-encoder models often overlook node representations and fail to capture contextual node information in the graph data, resulting in low-quality embeddings. Accordingly, this study proposes the ensemble graph auto-encoders (E-GAE) model. It uses the ensemble random walk graph auto-encoder, the random walk graph auto-encoder of the ensemble network, and the graph attention auto-encoder to generate three node embedding matrices Z, which are then combined with adaptive weights to reconstruct a new node embedding matrix, addressing the problem of low-quality embeddings. The model is evaluated on three publicly available datasets (Cora, Citeseer, and PubMed), and multiple experiments demonstrate its effectiveness: it achieves up to a 2.0% improvement on the link prediction task and a 9.4% improvement on the clustering task. Our code for this work can be found at https://github.com/xcgydfjjjderg/graphautoencoder.
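The abstract describes fusing three encoder outputs with adaptive weights. As a minimal sketch (not the authors' released code; see their GitHub repository for the actual implementation), the fusion step might look like the following, where the softmax parameterization of the weights and the `AdaptiveEnsemble` name are assumptions:

```python
# Minimal sketch of adaptive-weight fusion of several node embedding
# matrices Z_1..Z_k into one matrix, as described in the abstract.
# The softmax parameterization is an assumed design choice, not taken
# from the paper.
import torch
import torch.nn as nn

class AdaptiveEnsemble(nn.Module):
    def __init__(self, num_encoders: int = 3):
        super().__init__()
        # One learnable raw score per encoder; softmax keeps the fused
        # weights positive and summing to 1.
        self.raw_weights = nn.Parameter(torch.zeros(num_encoders))

    def forward(self, embeddings: list) -> torch.Tensor:
        # embeddings: list of k matrices, each of shape [num_nodes, dim]
        w = torch.softmax(self.raw_weights, dim=0)   # [k]
        z = torch.stack(embeddings, dim=0)           # [k, num_nodes, dim]
        # Weighted sum over encoders -> fused matrix [num_nodes, dim]
        return (w.view(-1, 1, 1) * z).sum(dim=0)

# Usage: fuse three hypothetical encoder outputs for a 4-node graph.
z1, z2, z3 = (torch.randn(4, 16) for _ in range(3))
fused = AdaptiveEnsemble(num_encoders=3)([z1, z2, z3])
print(fused.shape)  # torch.Size([4, 16])
```

In a standard graph auto-encoder setup (Kipf and Welling's GAE), the fused matrix Z would then be passed through an inner-product decoder, scoring each candidate edge (i, j) as sigmoid(z_i · z_j) for link prediction; whether E-GAE uses exactly this decoder is not stated in the abstract.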
ISSN: 2376-5992