Learning to represent causality in recommender systems driven by large language models (LLMs)


Bibliographic Details
Main Authors: Serge Stéphane Aman, Tiemoman Kone, Behou Gerald N’guessan, Kouadio Prosper Kimou
Format: Article
Language: English
Published: Springer 2025-08-01
Series: Discover Applied Sciences
Online Access: https://doi.org/10.1007/s42452-025-07551-8
Description
Summary: Current recommender systems mainly rely on correlation-based models, which limits their ability to uncover true causal relationships between user preferences and item suggestions. In this paper, we propose a hybrid model that combines a Bayesian network with a large language model (LLM) to enhance both the relevance and interpretability of recommendations. The Bayesian network captures causal dependencies among user-item interactions, while the LLM injects contextual semantics from user reviews and product descriptions. Our method was evaluated on a dataset of 1.2 million interactions and showed significant improvements over baseline models, with gains of 84.44% in precision, 88.37% in recall, and 89.36% in NDCG. A statistical t-test confirmed the significance of these improvements (p < 0.05). We further provide an error analysis and discuss the implications of using causal modeling for scalable, transparent, and GDPR-compliant recommender systems. Our results underscore the potential of causal representation learning to improve personalization and decision-making in recommender systems.
ISSN: 3004-9261
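The summary above reports ranking quality as NDCG (normalized discounted cumulative gain). As a minimal sketch of the standard definition of this metric, not the authors' evaluation code, NDCG can be computed as follows (the example relevance scores are illustrative only):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each relevance score is discounted
    by the log of its (1-indexed) rank position."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances, k=None):
    """NDCG: DCG of the given ranking divided by the DCG of the
    ideal ranking (relevances sorted in descending order)."""
    if k is not None:
        ranked_relevances = ranked_relevances[:k]
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: a ranking that places a moderately relevant item
# ahead of the most relevant one is penalized.
print(round(ndcg([2, 3, 0, 1]), 4))  # → 0.9079
print(ndcg([3, 2, 1, 0]))            # → 1.0 (ideal ordering)
```

A value of 1.0 means the system ordered items exactly by relevance; the abstract's 89.36% NDCG would correspond to a ranking close to that ideal.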