An empirical study of LLaMA3 quantization: from LLMs to MLLMs
Abstract: The LLaMA family, a collection of foundation language models ranging from 7B to 65B parameters, has become one of the most powerful open-source large language models (LLMs) and the popular LLM backbone of multi-modal large language models (MLLMs), widely used in computer vision and natural...
Format: Article
Language: English
Published: Springer, 2024-12-01
Series: Visual Intelligence
Online Access: https://doi.org/10.1007/s44267-024-00070-x