Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge

Bibliographic Details
Main Authors: Kehong You, Sanyang Liu, Yiguang Bai
Format: Article
Language: English
Published: Springer, 2024-11-01
Series: Complex & Intelligent Systems
Subjects: Influence maximization; Imbalanced heterogeneous networks; Graph embedding; Prior knowledge; Deep reinforcement learning
Online Access: https://doi.org/10.1007/s40747-024-01666-y
_version_ 1832571201161003008
author Kehong You
Sanyang Liu
Yiguang Bai
author_facet Kehong You
Sanyang Liu
Yiguang Bai
author_sort Kehong You
collection DOAJ
description Abstract Influence Maximization (IM) stands as a central challenge within the domain of complex network analysis, with the primary objective of identifying an optimal seed set of a predetermined size that maximizes the reach of influence propagation. Over time, numerous methodologies have been proposed to address the IM problem. However, one class of networks, referred to as Imbalanced Heterogeneous Networks (IHN) and widely found in social settings, urban and rural areas, and merchandising, presents challenges in achieving high-quality solutions. In this work, we introduce the Lightweight Reinforcement Learning algorithm with Prior knowledge (LRLP), which leverages the Struc2Vec graph embedding technique, capturing the structural similarity of nodes, to generate vector representations for nodes within the network. In detail, LRLP incorporates prior knowledge, based on a group of centralities, into the initial experience pool, which accelerates the reinforcement learning training toward better solutions. Additionally, the node embedding vectors are fed into a Deep Q Network (DQN) to commence the lightweight training process. Experimental evaluations conducted on synthetic and real networks showcase the effectiveness of the LRLP algorithm. Notably, the improvement appears to be more pronounced as the scale of the network grows. We also analyze the effect of different graph embedding algorithms and prior knowledge on the algorithmic results. Moreover, we analyze several parameters, such as the number of seed set selections T, the embedding dimension d, and the network update frequency C. It is significant that reducing the number of seed set selections T not only preserves solution quality but also lowers the algorithm's computational cost.
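The description above gives only a high-level account of how the LRLP pieces fit together. As a rough, illustrative sketch (not the authors' implementation), the Python snippet below wires up the three ingredients named in the abstract: structural node embeddings, an experience pool warm-started with centrality-based prior knowledge, and a small DQN that adds seed nodes one at a time under an independent cascade reward. Struc2Vec is replaced by simple hand-crafted structural features, the prior knowledge is reduced to a degree-centrality ranking, and every function name and hyper-parameter here is an assumption made for the example; the target network and the update frequency C discussed in the paper are omitted for brevity.

# Illustrative LRLP-style pipeline (assumptions throughout, not the paper's code):
# 1) embed nodes with simple structural features (stand-in for Struc2Vec),
# 2) warm-start the experience pool with centrality-based "prior knowledge",
# 3) train a small DQN that adds seed nodes one at a time,
#    with reward = marginal influence spread under an independent cascade model.
import random
import networkx as nx
import numpy as np
import torch
import torch.nn as nn

def structural_embedding(G, d=8):
    # Placeholder for Struc2Vec: a few structural statistics per node, zero-padded to d dims.
    deg, clus, core = dict(G.degree()), nx.clustering(G), nx.core_number(G)
    return {v: np.array([deg[v], clus[v], core[v]] + [0.0] * (d - 3), dtype=np.float32)
            for v in G.nodes()}

def spread_ic(G, seeds, p=0.1, runs=20):
    # Monte-Carlo estimate of influence spread under the independent cascade model.
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

class QNet(nn.Module):
    # Q(s, a): state = mean embedding of the current seed set, action = candidate node embedding.
    def __init__(self, d):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, state, action):
        return self.f(torch.cat([state, action], dim=-1)).squeeze(-1)

def prior_experiences(G, emb, k, d):
    # "Prior knowledge": transitions produced by a degree-centrality greedy pass,
    # used to warm-start the experience pool before any DQN interaction.
    pool, seeds = [], []
    for v in sorted(G.nodes(), key=G.degree, reverse=True)[:k]:
        state = np.mean([emb[u] for u in seeds], axis=0) if seeds else np.zeros(d, np.float32)
        gain = spread_ic(G, seeds + [v]) - (spread_ic(G, seeds) if seeds else 0.0)
        pool.append((state, emb[v], gain))
        seeds.append(v)
    return pool

def train_lrlp_like(G, k=5, d=8, episodes=30, eps=0.2):
    emb = structural_embedding(G, d)
    q = QNet(d)
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)
    pool = prior_experiences(G, emb, k, d)            # warm start with prior knowledge
    for _ in range(episodes):
        seeds = []
        for _ in range(k):                            # grow one seed set, epsilon-greedily
            state = np.mean([emb[u] for u in seeds], axis=0) if seeds else np.zeros(d, np.float32)
            cands = [v for v in G.nodes() if v not in seeds]
            if random.random() < eps:
                v = random.choice(cands)
            else:
                with torch.no_grad():
                    s = torch.tensor(np.stack([state] * len(cands)))
                    a = torch.tensor(np.stack([emb[c] for c in cands]))
                    v = cands[int(q(s, a).argmax())]
            gain = spread_ic(G, seeds + [v]) - (spread_ic(G, seeds) if seeds else 0.0)
            pool.append((state, emb[v], gain))
            seeds.append(v)
        batch = random.sample(pool, min(32, len(pool)))
        s = torch.tensor(np.stack([b[0] for b in batch]))
        a = torch.tensor(np.stack([b[1] for b in batch]))
        r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        loss = nn.functional.mse_loss(q(s, a), r)     # one-step targets; no target network for brevity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return seeds                                      # seed set from the final episode

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(200, 3, seed=1)      # small synthetic test graph
    print("selected seed set:", train_lrlp_like(G))

A run on a small Barabasi-Albert graph is included under __main__; swapping in a real Struc2Vec embedding or a different mix of centralities would only change structural_embedding and prior_experiences.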
format Article
id doaj-art-d6a50cec30de456aa537141183429e12
institution Kabale University
issn 2199-4536
2198-6053
language English
publishDate 2024-11-01
publisher Springer
record_format Article
series Complex & Intelligent Systems
spelling doaj-art-d6a50cec30de456aa537141183429e12 | 2025-02-02T12:49:45Z | eng | Springer | Complex & Intelligent Systems | 2199-4536, 2198-6053 | 2024-11-01 | 11 1 1 20 | 10.1007/s40747-024-01666-y | Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge | Kehong You, Sanyang Liu, Yiguang Bai (School of Mathematics and Statistics, Xidian University) | https://doi.org/10.1007/s40747-024-01666-y | Influence maximization; Imbalanced heterogeneous networks; Graph embedding; Prior knowledge; Deep reinforcement learning
spellingShingle Kehong You
Sanyang Liu
Yiguang Bai
Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge
Complex & Intelligent Systems
Influence maximization
Imbalanced heterogeneous networks
Graph embedding
Prior knowledge
Deep reinforcement learning
title Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge
title_full Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge
title_fullStr Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge
title_full_unstemmed Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge
title_short Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge
title_sort influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge
topic Influence maximization
Imbalanced heterogeneous networks
Graph embedding
Prior knowledge
Deep reinforcement learning
url https://doi.org/10.1007/s40747-024-01666-y
work_keys_str_mv AT kehongyou influencemaximizationunderimbalancedheterogeneousnetworksvialightweightreinforcementlearningwithpriorknowledge
AT sanyangliu influencemaximizationunderimbalancedheterogeneousnetworksvialightweightreinforcementlearningwithpriorknowledge
AT yiguangbai influencemaximizationunderimbalancedheterogeneousnetworksvialightweightreinforcementlearningwithpriorknowledge