Mixture of prompts learning for vision-language models
As powerful pre-trained vision-language models (VLMs) such as CLIP gain prominence, numerous studies have attempted to adapt VLMs to downstream tasks. Among these, prompt learning has been validated as an effective adaptation method that requires only a small number of trainable parameters. H...
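To make the idea concrete, below is a minimal sketch of prompt learning for a frozen VLM, assuming a CoOp-style setup in which only a small set of learnable context vectors is optimized while the pre-trained encoders stay frozen; the class names, dimensions, and module structure are illustrative placeholders, not the authors' mixture-of-prompts implementation.

```python
# A minimal prompt-learning sketch (CoOp-style assumption): the only trained
# parameters are a few shared context vectors prepended to frozen class-name
# token embeddings, which would then be fed to a frozen text encoder.
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    def __init__(self, n_ctx: int, ctx_dim: int, class_token_embeds: torch.Tensor):
        super().__init__()
        # Learnable context vectors shared across classes (the only trainable parameters).
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Frozen per-class name embeddings, shape (n_classes, n_name_tokens, ctx_dim).
        self.register_buffer("class_token_embeds", class_token_embeds)

    def forward(self) -> torch.Tensor:
        n_classes = self.class_token_embeds.shape[0]
        # Prepend the shared context to each class's name tokens:
        # result shape (n_classes, n_ctx + n_name_tokens, ctx_dim).
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        return torch.cat([ctx, self.class_token_embeds], dim=1)

# Usage: only the prompt learner's parameters go to the optimizer; the
# vision and text encoders of the pre-trained VLM would remain frozen.
class_embeds = torch.randn(10, 4, 512)   # 10 classes, 4 name tokens, dim 512 (illustrative)
prompt_learner = PromptLearner(n_ctx=16, ctx_dim=512, class_token_embeds=class_embeds)
optimizer = torch.optim.SGD(prompt_learner.parameters(), lr=2e-3)
prompts = prompt_learner()               # feed these into the frozen text encoder
print(prompts.shape)                     # torch.Size([10, 20, 512])
```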
| Main Authors: | Yu Du, Tong Niu, Rong Zhao |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-06-01 |
| Series: | Frontiers in Artificial Intelligence |
| Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2025.1580973/full |
Similar Items

- Fine-tuning or prompting on LLMs: evaluating knowledge graph construction task
  by: Hussam Ghanem, et al.
  Published: (2025-06-01)
- FireCLIP: Enhancing Forest Fire Detection with Multimodal Prompt Tuning and Vision-Language Understanding
  by: Shanjunxia Wu, et al.
  Published: (2025-06-01)
- Leveraging Local LLMs for Secure In-System Task Automation With Prompt-Based Agent Classification
  by: Suthir Sriram, et al.
  Published: (2024-01-01)
- An Adapted Few-Shot Prompting Technique Using ChatGPT to Advance Low-Resource Languages Understanding
  by: Saedeh Tahery, et al.
  Published: (2025-01-01)
- Named Entity Recognition Based on Multi-Class Label Prompt Selection and Core Entity Replacement
  by: Di Wu, et al.
  Published: (2025-05-01)