Joint embedding–classifier learning for interpretable collaborative filtering
Abstract: Background: Interpretability is a topical question in recommender systems, especially in healthcare applications. An interpretable classifier quantifies the importance of each input feature for the predicted item-user association in a non-ambiguous fashion. Results: We introduce the novel Joint Embedding Learning-classifier for improved Interpretability (JELI). By combining the training of a structured collaborative-filtering classifier and an embedding learning task, JELI predicts new user-item associations based on jointly learned item and user embeddings while providing feature-wise importance scores. Therefore, JELI flexibly allows the introduction of priors on the connections between users, items, and features. In particular, JELI simultaneously (a) learns feature, item, and user embeddings; (b) predicts new item-user associations; and (c) provides importance scores for each feature. Moreover, JELI instantiates a generic approach to training recommender systems by encoding generic graph-regularization constraints. Conclusions: First, we show that the joint training approach yields a gain in the predictive power of the downstream classifier. Second, JELI can recover feature-association dependencies. Finally, JELI induces a restriction in the number of parameters compared to baselines in synthetic and drug-repurposing data sets.
Main Authors: | Clémence Réda (Institute of Computer Science, University of Rostock), Jill-Jênn Vie (Soda, Inria Saclay), Olaf Wolkenhauer (Institute of Computer Science, University of Rostock) |
---|---|
Format: | Article |
Language: | English |
Published: | BMC, 2025-01-01 |
Series: | BMC Bioinformatics |
ISSN: | 1471-2105 |
Collection: | DOAJ |
Subjects: | Drug repurposing; Interpretability; Gene expression; Collaborative filtering |
Online Access: | https://doi.org/10.1186/s12859-024-06026-8 |
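The abstract describes the core idea of JELI at a high level: user and item embeddings are learned jointly with the association classifier, and they are tied to input features so that per-feature importance scores fall out of the model. As an illustration only, the following PyTorch sketch shows one way such a joint embedding-classifier with feature-wise importance could be set up. It is a minimal sketch under stated assumptions, not the authors' JELI implementation: the class name `JointEmbeddingClassifier`, the linear feature-to-embedding maps, the binary cross-entropy loss, and the plain L2 penalty standing in for the paper's graph-regularization constraints are all hypothetical choices.

```python
# Hypothetical sketch of joint embedding-classifier learning for collaborative
# filtering with feature-level interpretability. NOT the authors' JELI code;
# the architecture, loss, and regularization below are illustrative assumptions.
import torch
import torch.nn as nn

class JointEmbeddingClassifier(nn.Module):
    def __init__(self, n_user_features, n_item_features, dim):
        super().__init__()
        # User/item embeddings are linear images of their feature vectors,
        # so each feature's contribution to a prediction can be traced back
        # to a row of the learned weight matrices.
        self.user_feat_emb = nn.Linear(n_user_features, dim, bias=False)
        self.item_feat_emb = nn.Linear(n_item_features, dim, bias=False)

    def forward(self, user_x, item_x):
        u = self.user_feat_emb(user_x)   # (batch, dim) user embeddings
        v = self.item_feat_emb(item_x)   # (batch, dim) item embeddings
        return (u * v).sum(dim=-1)       # association score for each pair

    def feature_importance(self):
        # Simple importance proxy: norm of each feature's embedding column.
        return {
            "user": self.user_feat_emb.weight.norm(dim=0),
            "item": self.item_feat_emb.weight.norm(dim=0),
        }

# Toy usage with random data (shapes and hyperparameters are arbitrary).
model = JointEmbeddingClassifier(n_user_features=20, n_item_features=30, dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
user_x = torch.randn(64, 20)
item_x = torch.randn(64, 30)
labels = torch.randint(0, 2, (64,)).float()
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(100):
    opt.zero_grad()
    scores = model(user_x, item_x)
    # Joint objective: association classification plus an L2 penalty, used
    # here as a placeholder for structured/graph regularization.
    loss = loss_fn(scores, labels) + 1e-3 * sum(
        p.pow(2).sum() for p in model.parameters()
    )
    loss.backward()
    opt.step()

print(model.feature_importance()["item"][:5])
```

The key design point this sketch tries to convey is that prediction and embedding learning share the same feature-embedding parameters, so the classifier's decisions remain attributable to individual input features rather than to opaque, free-floating latent factors.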