A Spatial-Temporal Self-Attention Network (STSAN) for Location Prediction
Main Authors: | Shuang Wang, AnLiang Li, Shuai Xie, WenZhu Li, BoWei Wang, Shuai Yao, Muhammad Asif |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2021-01-01 |
Series: | Complexity |
Online Access: | http://dx.doi.org/10.1155/2021/6692313 |
_version_ | 1832560131784572928 |
---|---|
author | Shuang Wang; AnLiang Li; Shuai Xie; WenZhu Li; BoWei Wang; Shuai Yao; Muhammad Asif |
author_facet | Shuang Wang; AnLiang Li; Shuai Xie; WenZhu Li; BoWei Wang; Shuai Yao; Muhammad Asif |
author_sort | Shuang Wang |
collection | DOAJ |
description | With the popularity of location-based social networks, location prediction has become an important task and has gained significant attention in recent years. However, how to effectively use massive trajectory data and spatial-temporal context information to mine users’ mobility patterns and predict their next location remains an open problem. In this paper, we propose a novel network named STSAN (spatial-temporal self-attention network), which integrates spatial-temporal information with self-attention for location prediction. In STSAN, we design a trajectory attention module to learn users’ dynamic trajectory representations; it consists of three components: location attention, which captures sequential transitions between locations with self-attention; spatial attention, which captures the user’s preference for geographic locations; and temporal attention, which captures the user’s temporal activity preference. Finally, extensive experiments on four real-world check-in datasets verify the effectiveness of the proposed method. Experimental results show that spatial-temporal information effectively improves model performance: STSAN gains about 39.8% in Acc@1 and 4.4% in APR over the strongest baseline on the New York City dataset. |
format | Article |
id | doaj-art-aaa83634ff65488a80630a0536ea6185 |
institution | Kabale University |
issn | 1076-2787 1099-0526 |
language | English |
publishDate | 2021-01-01 |
publisher | Wiley |
record_format | Article |
series | Complexity |
spelling | doaj-art-aaa83634ff65488a80630a0536ea6185; 2025-02-03T01:28:23Z; eng; Wiley; Complexity; 1076-2787; 1099-0526; 2021-01-01; 2021; 10.1155/2021/6692313; 6692313; A Spatial-Temporal Self-Attention Network (STSAN) for Location Prediction; Shuang Wang, AnLiang Li, Shuai Xie, WenZhu Li, BoWei Wang, Shuai Yao (School of Software, Northeastern University, Shenyang 110000, China); Muhammad Asif (Department of Computer Science, Ekha Ghund Degree College Mohmand, Peshawar, KpK 24650, Pakistan); With the popularity of location-based social networks, location prediction has become an important task and has gained significant attention in recent years. However, how to effectively use massive trajectory data and spatial-temporal context information to mine users’ mobility patterns and predict their next location remains an open problem. In this paper, we propose a novel network named STSAN (spatial-temporal self-attention network), which integrates spatial-temporal information with self-attention for location prediction. In STSAN, we design a trajectory attention module to learn users’ dynamic trajectory representations; it consists of three components: location attention, which captures sequential transitions between locations with self-attention; spatial attention, which captures the user’s preference for geographic locations; and temporal attention, which captures the user’s temporal activity preference. Finally, extensive experiments on four real-world check-in datasets verify the effectiveness of the proposed method. Experimental results show that spatial-temporal information effectively improves model performance: STSAN gains about 39.8% in Acc@1 and 4.4% in APR over the strongest baseline on the New York City dataset. http://dx.doi.org/10.1155/2021/6692313 |
spellingShingle | Shuang Wang; AnLiang Li; Shuai Xie; WenZhu Li; BoWei Wang; Shuai Yao; Muhammad Asif; A Spatial-Temporal Self-Attention Network (STSAN) for Location Prediction; Complexity |
title | A Spatial-Temporal Self-Attention Network (STSAN) for Location Prediction |
title_full | A Spatial-Temporal Self-Attention Network (STSAN) for Location Prediction |
title_fullStr | A Spatial-Temporal Self-Attention Network (STSAN) for Location Prediction |
title_full_unstemmed | A Spatial-Temporal Self-Attention Network (STSAN) for Location Prediction |
title_short | A Spatial-Temporal Self-Attention Network (STSAN) for Location Prediction |
title_sort | spatial temporal self attention network stsan for location prediction |
url | http://dx.doi.org/10.1155/2021/6692313 |
work_keys_str_mv | AT shuangwang aspatialtemporalselfattentionnetworkstsanforlocationprediction AT anliangli aspatialtemporalselfattentionnetworkstsanforlocationprediction AT shuaixie aspatialtemporalselfattentionnetworkstsanforlocationprediction AT wenzhuli aspatialtemporalselfattentionnetworkstsanforlocationprediction AT boweiwang aspatialtemporalselfattentionnetworkstsanforlocationprediction AT shuaiyao aspatialtemporalselfattentionnetworkstsanforlocationprediction AT muhammadasif aspatialtemporalselfattentionnetworkstsanforlocationprediction AT shuangwang spatialtemporalselfattentionnetworkstsanforlocationprediction AT anliangli spatialtemporalselfattentionnetworkstsanforlocationprediction AT shuaixie spatialtemporalselfattentionnetworkstsanforlocationprediction AT wenzhuli spatialtemporalselfattentionnetworkstsanforlocationprediction AT boweiwang spatialtemporalselfattentionnetworkstsanforlocationprediction AT shuaiyao spatialtemporalselfattentionnetworkstsanforlocationprediction AT muhammadasif spatialtemporalselfattentionnetworkstsanforlocationprediction |
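The description field above outlines the STSAN architecture: a trajectory attention module built from location attention (self-attention over the check-in sequence), spatial attention (geographic preference), and temporal attention (activity-time preference). The following is a minimal, illustrative PyTorch sketch of that idea, not the authors' released implementation: the module names, the discretization of distances and time slots into integer ids, the concatenation-based fusion, and all dimensions are assumptions made here for illustration.

```python
# Minimal illustrative sketch of a spatial-temporal self-attention model in the
# spirit of the STSAN abstract. NOT the authors' code: module names, input
# discretization, and the fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn


class STSANSketch(nn.Module):
    def __init__(self, num_locations, num_time_slots, num_dist_buckets,
                 d_model=64, n_heads=4):
        super().__init__()
        # Embeddings for location ids, discretized time slots (e.g. hour of day),
        # and discretized distances between consecutive check-ins.
        self.loc_emb = nn.Embedding(num_locations, d_model, padding_idx=0)
        self.time_emb = nn.Embedding(num_time_slots, d_model, padding_idx=0)
        self.dist_emb = nn.Embedding(num_dist_buckets, d_model, padding_idx=0)

        # "Location attention": self-attention over the check-in sequence,
        # capturing sequential transition patterns.
        self.loc_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # "Spatial" and "temporal" attention: attend over spatial / temporal
        # context, queried by the location representation.
        self.spa_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.tem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        # Fuse the three representations and score candidate next locations.
        self.fuse = nn.Linear(3 * d_model, d_model)
        self.out = nn.Linear(d_model, num_locations)

    def forward(self, loc_seq, time_seq, dist_seq, pad_mask=None):
        # loc_seq, time_seq, dist_seq: (batch, seq_len) integer ids.
        l = self.loc_emb(loc_seq)
        t = self.time_emb(time_seq)
        d = self.dist_emb(dist_seq)

        loc_rep, _ = self.loc_attn(l, l, l, key_padding_mask=pad_mask)
        spa_rep, _ = self.spa_attn(loc_rep, d, d, key_padding_mask=pad_mask)
        tem_rep, _ = self.tem_attn(loc_rep, t, t, key_padding_mask=pad_mask)

        # Use the representation at the last trajectory position to predict
        # the next location.
        h = torch.cat([loc_rep[:, -1], spa_rep[:, -1], tem_rep[:, -1]], dim=-1)
        h = torch.tanh(self.fuse(h))
        return self.out(h)  # (batch, num_locations) scores


if __name__ == "__main__":
    model = STSANSketch(num_locations=1000, num_time_slots=25, num_dist_buckets=33)
    loc = torch.randint(1, 1000, (2, 10))
    tim = torch.randint(1, 25, (2, 10))
    dst = torch.randint(1, 33, (2, 10))
    print(model(loc, tim, dst).shape)  # torch.Size([2, 1000])
```

In this sketch the three attended representations at the last trajectory position are simply concatenated and projected to scores over candidate locations; how STSAN actually fuses the location, spatial, and temporal signals, and how it handles padding and evaluation (Acc@1, APR), is specified in the article at the DOI above.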