Efficient knowledge distillation and alignment for improved KB-VQA
Abstract: Knowledge-based visual question answering (KB-VQA) often requires utilizing external knowledge to answer natural language questions about image content. Recent research has emphasized the importance of knowledge in answering questions by implicitly leveraging Large Language Models (LLMs). H...
| Main Authors: | Xiaofei Qin, Ruiqi Pei, Changxiang He, Fan Li, Xuedian Zhang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-07539-9 |
Similar Items
- Aligning to the teacher: multilevel feature-aligned knowledge distillation
  by: Yang Zhang, et al. Published: (2025-08-01)
- C3-VQA: Cryogenic Counter-Based Coprocessor for Variational Quantum Algorithms
  by: Yosuke Ueno, et al. Published: (2025-01-01)
- Knowledge distillation for spiking neural networks: aligning features and saliency
  by: Yifan Hu, et al. Published: (2025-01-01)
- Knowledge Distillation with Geometry-Consistent Feature Alignment for Robust Low-Light Apple Detection
  by: Yuanping Shi, et al. Published: (2025-08-01)
- Logitwise Distillation Network: Improving Knowledge Distillation via Introducing Sample Confidence
  by: Teng Shen, et al. Published: (2025-02-01)