Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models

Modern large language models (LLMs) achieve state-of-the-art performance through architectural advancements but require high computational costs for inference. Post-training quantization is a widely adopted approach to reduce these costs by quantizing weights and activations to lower precision, such...
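As a rough illustration of the quantization issue the abstract alludes to (not the authors' method), the sketch below shows plain symmetric per-tensor INT8 quantization and how a single activation "spike" inflates the quantization scale, collapsing the remaining values onto few integer levels. The function names and the outlier magnitude are illustrative assumptions.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map x onto [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Well-behaved activations quantize with small round-off error.
acts = np.random.randn(1024).astype(np.float32)
q, s = quantize_int8(acts)
print("typical error:", np.abs(dequantize(q, s) - acts).mean())

# A single large spike inflates the scale, so the error on the
# remaining (ordinary) values grows sharply.
spiked = acts.copy()
spiked[0] = 500.0  # hypothetical outlier magnitude
q, s = quantize_int8(spiked)
print("error with spike:", np.abs(dequantize(q, s)[1:] - spiked[1:]).mean())
```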


Bibliographic Details
Main Authors: Jaewoo Yang, Hayun Kim, Junyung Ji, Younghoon Kim
Format: Article
Language: English
Published: MDPI AG, 2025-04-01
Series: Future Internet
Subjects:
Online Access: https://www.mdpi.com/1999-5903/17/4/185