Unsupervised random walk manifold contrastive hashing for multimedia retrieval
Abstract With the rapid growth in both the variety and volume of data on networks, especially within social networks containing vast multimedia data such as text, images, and video, there is an urgent need for efficient methods to retrieve helpful information quickly. Due to their high computational...
| Main Authors: | Yunfei Chen, Yitian Long, Zhan Yang, Jun Long |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-02-01 |
| Series: | Complex & Intelligent Systems |
| Subjects: | Cross-modal hashing; Multimedia data; Manifold similarity; Intra- and inter-modal |
| Online Access: | https://doi.org/10.1007/s40747-025-01814-y |
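The "random walk manifold" component named in the title — building a manifold-aware similarity matrix by propagating a per-modality similarity graph with a short random walk — can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function names, the kNN sparsification, and the walk length are all assumptions.

```python
import numpy as np

def cosine_similarity_matrix(features: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity of row-wise feature vectors."""
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    return normed @ normed.T

def random_walk_manifold_similarity(features: np.ndarray,
                                    steps: int = 2,
                                    k_neighbors: int = 5) -> np.ndarray:
    """Propagate a kNN-sparsified similarity graph by a short random walk,
    so points on the same manifold become similar even when their direct
    cosine similarity is low."""
    sim = cosine_similarity_matrix(features)
    n = sim.shape[0]
    # Keep only each point's top-k neighbors (sparse manifold structure);
    # clip negatives so the graph weights form a valid transition matrix.
    adj = np.zeros_like(sim)
    for i in range(n):
        idx = np.argsort(sim[i])[-k_neighbors:]
        adj[i, idx] = np.maximum(sim[i, idx], 0.0)
    # Row-normalize into a transition matrix and walk `steps` times.
    trans = adj / adj.sum(axis=1, keepdims=True)
    walk = np.linalg.matrix_power(trans, steps)
    # Symmetrize so the result can serve as a similarity matrix.
    return (walk + walk.T) / 2
```

A two-step walk already connects points that share neighbors but are not directly similar, which is the manifold structure a plain cosine matrix misses.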
| _version_ | 1850065646872690688 |
|---|---|
| author | Yunfei Chen, Yitian Long, Zhan Yang, Jun Long |
| author_facet | Yunfei Chen, Yitian Long, Zhan Yang, Jun Long |
| author_sort | Yunfei Chen |
| collection | DOAJ |
| description | Abstract With the rapid growth in both the variety and volume of data on networks, especially within social networks containing vast multimedia data such as text, images, and video, there is an urgent need for efficient methods to retrieve helpful information quickly. Due to their high computational efficiency and low storage costs, unsupervised deep cross-modal hashing methods have become the primary approach to managing large-scale multimedia data. However, existing unsupervised deep cross-modal hashing methods still struggle with issues such as inaccurate measurement of semantic similarity information, complex network architectures, and incomplete constraints among multimedia data. To address these issues, we propose an Unsupervised Random Walk Manifold Contrastive Hashing (URWMCH) method with a simple deep learning architecture. First, we build a manifold similarity matrix based on a random walk strategy over the modal-individual similarity structure. Second, we construct intra- and inter-modal similarity preservation losses and a coexistent similarity preservation loss based on contrastive learning to constrain the training of the hash functions, ensuring that the hash codes contain complete semantic association information. Finally, we design comprehensive experiments on the MIRFlickr-25K, NUS-WIDE, and MS COCO datasets to demonstrate the effectiveness and superiority of the proposed URWMCH method. |
| format | Article |
| id | doaj-art-cdc3e07b670a41f19e9a8be7a15d734d |
| institution | DOAJ |
| issn | 2199-4536 2198-6053 |
| language | English |
| publishDate | 2025-02-01 |
| publisher | Springer |
| record_format | Article |
| series | Complex & Intelligent Systems |
| spelling | doaj-art-cdc3e07b670a41f19e9a8be7a15d734d2025-08-20T02:48:57ZengSpringerComplex & Intelligent Systems2199-45362198-60532025-02-0111411410.1007/s40747-025-01814-yUnsupervised random walk manifold contrastive hashing for multimedia retrievalYunfei Chen0Yitian Long1Zhan Yang2Jun Long3Big Data Institute, School of Computer Science and Engineering, Central South UniversityData Science Institute, Vanderbilt UniversityBig Data Institute, School of Computer Science and Engineering, Central South UniversityBig Data Institute, School of Computer Science and Engineering, Central South Universityhttps://doi.org/10.1007/s40747-025-01814-yCross-modal hashingMultimedia dataManifold similarityIntra- and inter-modal |
| spellingShingle | Yunfei Chen Yitian Long Zhan Yang Jun Long Unsupervised random walk manifold contrastive hashing for multimedia retrieval Complex & Intelligent Systems Cross-modal hashing Multimedia data Manifold similarity Intra- and inter-modal |
| title | Unsupervised random walk manifold contrastive hashing for multimedia retrieval |
| title_full | Unsupervised random walk manifold contrastive hashing for multimedia retrieval |
| title_fullStr | Unsupervised random walk manifold contrastive hashing for multimedia retrieval |
| title_full_unstemmed | Unsupervised random walk manifold contrastive hashing for multimedia retrieval |
| title_short | Unsupervised random walk manifold contrastive hashing for multimedia retrieval |
| title_sort | unsupervised random walk manifold contrastive hashing for multimedia retrieval |
| topic | Cross-modal hashing Multimedia data Manifold similarity Intra- and inter-modal |
| url | https://doi.org/10.1007/s40747-025-01814-y |
| work_keys_str_mv | AT yunfeichen unsupervisedrandomwalkmanifoldcontrastivehashingformultimediaretrieval AT yitianlong unsupervisedrandomwalkmanifoldcontrastivehashingformultimediaretrieval AT zhanyang unsupervisedrandomwalkmanifoldcontrastivehashingformultimediaretrieval AT junlong unsupervisedrandomwalkmanifoldcontrastivehashingformultimediaretrieval |
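The abstract's second step — intra- and inter-modal similarity preservation under contrastive learning — can be sketched as a loss that pushes the cosine similarities of continuous image and text hash codes toward a shared target similarity matrix. The function and variable names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def similarity_preservation_loss(h_img: np.ndarray,
                                 h_txt: np.ndarray,
                                 target_sim: np.ndarray) -> float:
    """Sum of mean-squared gaps between a target similarity matrix and the
    image-image, text-text (intra-modal) and image-text (inter-modal)
    cosine-similarity matrices of the codes."""
    zi = h_img / (np.linalg.norm(h_img, axis=1, keepdims=True) + 1e-12)
    zt = h_txt / (np.linalg.norm(h_txt, axis=1, keepdims=True) + 1e-12)
    intra_img = zi @ zi.T  # image-image similarities
    intra_txt = zt @ zt.T  # text-text similarities
    inter = zi @ zt.T      # cross-modal similarities

    def mse(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.mean((a - b) ** 2))

    return (mse(intra_img, target_sim)
            + mse(intra_txt, target_sim)
            + mse(inter, target_sim))
```

In a full pipeline, the target matrix would be the random-walk manifold similarity described in the abstract, the codes would come from per-modality encoders, and binary hash codes would be obtained by taking the sign of the continuous codes at retrieval time.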