WAPS-Quant: Low-Bit Post-Training Quantization Using Weight-Activation Product Scaling
Post-Training Quantization (PTQ) effectively compresses neural networks to very few bits using only a limited calibration dataset. Various quantization methods that exploit second-order error have been proposed and have demonstrated good performance. However, at extremely low bit-widths, the increase in qu...
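To make the PTQ setting mentioned in the abstract concrete, here is a minimal sketch of generic round-to-nearest uniform weight quantization. This is not the WAPS-Quant method described in the paper; the function names (`quantize_weights`, `dequantize`), the 4-bit setting, and the toy weight tensor are all illustrative assumptions.

```python
# Generic uniform post-training weight quantization (round-to-nearest,
# per-tensor scale). Illustrative baseline only; NOT the paper's method.
import numpy as np

def quantize_weights(w: np.ndarray, n_bits: int = 4):
    """Symmetric uniform quantization of a weight tensor to n_bits."""
    qmax = 2 ** (n_bits - 1) - 1               # e.g. 7 for 4-bit signed codes
    scale = np.abs(w).max() / qmax             # per-tensor scale factor
    w_int = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return w_int.astype(np.int8), scale

def dequantize(w_int: np.ndarray, scale: float) -> np.ndarray:
    """Map integer codes back to real values for simulated inference."""
    return w_int.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)   # toy weight matrix
    w_int, s = quantize_weights(w, n_bits=4)
    w_hat = dequantize(w_int, s)
    # Quantization error grows sharply at very low bit-widths,
    # which is the regime the abstract refers to.
    print("mean squared quantization error:", float(np.mean((w - w_hat) ** 2)))
```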
| Main Authors: | Geunjae Choi, Kamin Lee, Nojun Kwak |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10982219/ |
Similar Items
- COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization
  by: Aozhong Zhang, et al.
  Published: (2025-01-01)
- One-bit maximum likelihood algorithm for sensor networks in non-ideal channels
  by: Nana WANG, et al.
  Published: (2020-03-01)
- Source Quantization and Coding over Noisy Channel Analysis
  by: Runfeng Wang, et al.
  Published: (2024-11-01)
- Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models
  by: Jaewoo Yang, et al.
  Published: (2025-04-01)
- Quantization for a Condensation System
  by: Shivam Dubey, et al.
  Published: (2025-04-01)