Tailorable codes for lattice-based KEMs with applications to compact ML-KEM instantiations
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Ruhr-Universität Bochum, 2025-06-01 |
| Series: | Transactions on Cryptographic Hardware and Embedded Systems |
| Subjects: | |
| Online Access: | https://tches.iacr.org/index.php/TCHES/article/view/12213 |
| Summary: | Compared to elliptic curve cryptography, a primary drawback of lattice-based schemes is the larger size of their public keys and ciphertexts. A common procedure for compressing these objects consists essentially of dropping some of their least significant bits. Albeit effective for compression, there is a limit to the number of bits that can be dropped before the decryption failure rate (DFR) becomes noticeable, which is a security concern. To address this issue, this paper presents a family of error-correction codes that, by allowing an increased number of dropped bits while preserving a negligible DFR, can be used for both ciphertext and public-key compression in modern lattice-based schemes. To showcase the impact and practicality of our proposal, we use the highly optimized ML-KEM, a post-quantum lattice-based scheme recently standardized by NIST. We provide detailed procedures for tailoring our codes to ML-KEM’s specific noise distributions, and show how to analyze the DFR without independence assumptions on the noise coefficients. Among our results, we achieve between 4% and 8% ciphertext compression for ML-KEM. Alternatively, we obtain 8% shorter public keys compared to the current standard. We also present isochronous implementations of the decoding procedure, achieving negligible performance impact on the full ML-KEM decapsulation even when considering optimized implementations for AVX2, Cortex-M4, and Cortex-A53. |
| ISSN: | 2569-2925 |
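
To make the compression mechanism mentioned in the summary concrete, the following is a minimal Python sketch of the plain bit-dropping rounding (the Compress_d/Decompress_d operations from FIPS 203) that such schemes rely on. It is not the paper's tailored error-correction codes; the function names and the experiment at the bottom are only illustrative, assuming the standard ML-KEM modulus q = 3329.

```python
# Minimal sketch (assumed, not taken from the paper) of the plain bit-dropping
# compression used in ML-KEM (FIPS 203). compress() rounds a coefficient mod q
# down to d bits; decompress() maps it back. Dropping more bits (smaller d)
# shrinks ciphertexts and public keys but increases the rounding noise, which
# is what drives the decryption failure rate that the paper's codes are
# designed to keep negligible.

Q = 3329  # the ML-KEM modulus


def compress(x: int, d: int) -> int:
    """Round x * 2^d / q to the nearest integer and reduce mod 2^d."""
    return ((x * (1 << d) + Q // 2) // Q) % (1 << d)


def decompress(y: int, d: int) -> int:
    """Approximate inverse: round y * q / 2^d to the nearest integer."""
    return (y * Q + (1 << (d - 1))) >> d


if __name__ == "__main__":
    # Worst-case rounding error over all coefficients, measured modulo q.
    # Smaller d gives better compression but a larger error term.
    for d in (10, 4):
        worst = 0
        for x in range(Q):
            diff = abs(x - decompress(compress(x, d), d))
            worst = max(worst, min(diff, Q - diff))
        print(f"d = {d:2d}: worst-case rounding error {worst}")
```

For reference, the current ML-KEM parameter sets round one ciphertext component to d = 10 or 11 bits and the other to d = 4 or 5 bits; the paper's contribution is to tolerate more aggressive rounding by correcting the resulting errors with codes tailored to ML-KEM's noise distribution.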