Efficient Preference Clustering via Random Fourier Features


Bibliographic Details
Main Authors: Jingshu Liu, Li Wang, Jinglei Liu
Format: Article
Language: English
Published: Tsinghua University Press, 2019-09-01
Series: Big Data Mining and Analytics
Subjects:
Online Access: https://www.sciopen.com/article/10.26599/BDMA.2019.9020003
Description
Summary: Approximations based on random Fourier features have recently emerged as an efficient and elegant method for large-scale machine learning tasks. Unlike approaches based on the Nyström method, which randomly samples the training examples, we use random Fourier features, whose basis functions (i.e., cosine and sine) are sampled from a distribution independent of the training sample set, to cluster preference data, which appears extensively in recommender systems. First, we propose a two-stage preference clustering framework. In this framework, random Fourier features map the preference matrix into a feature matrix, and the traditional k-means approach then clusters the preference data in the transformed feature space. Compared with traditional preference clustering, our method avoids the problem of insufficient memory and greatly improves computational efficiency. Experiments on a movie data set containing 100 000 ratings show that the proposed method achieves higher clustering accuracy than the Nyström method and k-means, while also offering better overall performance than these clustering approaches.
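The two-stage framework described in the summary can be illustrated with a short sketch. The Python code below is only a minimal illustration under assumptions not stated in the record: it assumes a Gaussian (RBF) kernel with bandwidth sigma, treats the preference data as a dense user-by-item rating matrix R, and uses scikit-learn's KMeans for the second stage; the function name random_fourier_features and all parameter values are hypothetical rather than the authors' implementation.

```python
# Minimal sketch of a two-stage preference clustering pipeline:
# Stage 1 maps the preference matrix into a random Fourier feature space,
# Stage 2 runs ordinary k-means on the transformed features.
import numpy as np
from sklearn.cluster import KMeans

def random_fourier_features(X, n_components=256, sigma=1.0, seed=0):
    """Map rows of X into features whose inner products approximate
    the Gaussian kernel exp(-||x - y||^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the Fourier transform of the Gaussian kernel,
    # independently of the training data.
    W = rng.normal(scale=1.0 / sigma, size=(d, n_components))
    proj = X @ W
    # Cosine and sine bases, scaled so that Z @ Z.T approximates the kernel matrix.
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(n_components)

# Stage 1: transform a (placeholder) preference matrix.
R = np.random.rand(1000, 50)                       # illustrative rating matrix
Z = random_fourier_features(R, n_components=256, sigma=1.0)

# Stage 2: cluster in the transformed feature space.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Z)
```

Because k-means operates on the explicit feature matrix Z rather than on an n-by-n kernel matrix, memory grows linearly with the number of users, which is the efficiency advantage the summary attributes to the random Fourier feature approach.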
ISSN: 2096-0654