41
Quantized Feedback Control of Active Suspension Systems Based on Event Trigger
Published 2021-01-01“…In addition, the trigger mechanism can effectively improve the working efficiency of the actuators.…”
Get full text
Article -
42
-
43
Decentralized non-convex online optimization with adaptive momentum estimation and quantized communication
Published 2025-03-01“…To solve the problem in a communication-efficient manner, we propose a novel quantized decentralized adaptive momentum gradient descent algorithm based on adaptive momentum estimation methods, in which quantized information is exchanged between agents. …”
Get full text
Article -
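Entry 43 describes agents that exchange quantized information while each runs an adaptive-momentum update. The paper's algorithm is not reproduced here; the following is only a minimal sketch of that general idea, assuming a synchronous round, a doubly stochastic mixing matrix, and an unbiased uniform stochastic quantizer. All function names and parameters are illustrative.

```python
import numpy as np

def stochastic_quantize(x, levels=16):
    """Uniform stochastic quantizer: maps x onto `levels` evenly spaced values
    in [min(x), max(x)], rounding up or down at random so the quantized vector
    is unbiased in expectation."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    step = (hi - lo) / (levels - 1)
    scaled = (x - lo) / step
    floor = np.floor(scaled)
    prob_up = scaled - floor
    q = floor + (np.random.rand(*x.shape) < prob_up)
    return lo + q * step

def decentralized_adam_step(params, grads, states, mixing_w, lr=1e-2,
                            beta1=0.9, beta2=0.999, eps=1e-8, levels=16):
    """One synchronous round: each agent quantizes its parameter vector, mixes
    the quantized copies of its neighbors (weights in `mixing_w`), then applies
    an Adam-style update with its own local gradient."""
    n = len(params)
    q_params = [stochastic_quantize(p, levels) for p in params]
    new_params = []
    for i in range(n):
        mixed = sum(mixing_w[i][j] * q_params[j] for j in range(n))
        m, v, t = states[i]
        t += 1
        m = beta1 * m + (1 - beta1) * grads[i]
        v = beta2 * v + (1 - beta2) * grads[i] ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        states[i] = (m, v, t)
        new_params.append(mixed - lr * m_hat / (np.sqrt(v_hat) + eps))
    return new_params
```

Quantizing the exchanged vectors to a few levels reduces per-round communication, while the stochastic rounding keeps the mixed estimate unbiased in expectation.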
44
Convolution Smooth: A Post-Training Quantization Method for Convolutional Neural Networks
Published 2025-01-01“…Convolutional neural network (CNN) quantization is an efficient model compression technique primarily used for accelerating inference and optimizing resources. …”
Get full text
Article -
45
AFQSeg: An Adaptive Feature Quantization Network for Instance-Level Surface Crack Segmentation
Published 2025-05-01“…To address these issues, this paper proposes a crack detection model based on adaptive feature quantization, which primarily consists of a maximum soft pooling module, an adaptive crack feature quantization module, and a trainable crack post-processing module. …”
Get full text
Article -
46
Hierarchical Mixed-Precision Post-Training Quantization for SAR Ship Detection Networks
Published 2024-10-01“…However, limited satellite platform resources present a significant challenge. Post-training quantization (PTQ) provides an efficient way to reduce the memory and computational requirements of pre-trained neural networks without retraining. …”
Get full text
Article -
47
NeuBridge: bridging quantized activations and spiking neurons for ANN-SNN conversion
Published 2025-01-01“…Spiking neural networks (SNNs) offer a promising avenue for energy-efficient computations on neuromorphic hardware, leveraging the unique advantages of spike-based signaling. …”
Get full text
Article -
48
Randomized Quantization for Privacy in Resource Constrained Machine Learning at-the-Edge and Federated Learning
Published 2025-01-01“…Through rigorous theoretical analysis and extensive experiments on benchmark datasets, we demonstrate that these methods significantly enhance the utility-privacy trade-off and computational efficiency in both ML-at-the-edge and FL systems. RQP-SGD is evaluated on MNIST and the Breast Cancer Diagnostic dataset, showing an average 10.62% utility improvement over the deterministic quantization-based projected DP-SGD while maintaining (1.0, 0)-DP. …”
Get full text
Article -
49
Loss-minimum post-training parameter quantization method for convolutional neural network
Published 2022-04-01“…To address the problem that no dataset is available for model quantization in data-sensitive scenarios, a model quantization method that requires no dataset was proposed. First, simulated input data were generated by error minimization according to the parameters of the batch normalization layers and the distribution characteristics of image data. Then, by studying the characteristics of data rounding, a dynamic factor rounding method based on loss minimization was proposed. Quantization experiments on classification models such as GhostNet and object detection models such as M2Det verified the effectiveness of the proposed method for image classification and object detection. The experimental results show that the proposed quantization method reduces model size by about 75%, effectively lowers power consumption, and improves computational efficiency while essentially preserving the accuracy of the original model.…”
Get full text
Article -
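Entry 49 outlines a data-free post-training quantization pipeline: calibration inputs are synthesized from batch-normalization statistics, and rounding is then chosen to minimize loss. The sketch below illustrates only the first ingredient in a generic form (matching per-channel BatchNorm running statistics, in the spirit of zero-shot calibration methods); it is not the paper's procedure, the loss-minimizing rounding step is omitted, and the model choice, step count, and learning rate are arbitrary assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

def synthesize_calibration_batch(model, batch_size=16, steps=200, lr=0.1):
    """Optimize random images so that the per-channel mean/variance entering
    each BatchNorm2d layer matches that layer's stored running statistics."""
    model.eval()
    for p in model.parameters():      # only the images are optimized
        p.requires_grad_(False)
    captured = {}                     # BatchNorm2d module -> (mean, var) of its input

    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            captured[bn] = (x.mean(dim=(0, 2, 3)),
                            x.var(dim=(0, 2, 3), unbiased=False))
        return hook

    bn_layers = [m for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    handles = [bn.register_forward_hook(make_hook(bn)) for bn in bn_layers]

    images = torch.randn(batch_size, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([images], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(images)
        loss = images.new_zeros(())
        for bn in bn_layers:
            mu, var = captured[bn]
            loss = loss + ((mu - bn.running_mean) ** 2).mean() \
                        + ((var - bn.running_var) ** 2).mean()
        loss.backward()
        opt.step()

    for h in handles:
        h.remove()
    return images.detach()

# Example: synthetic calibration images for a pretrained MobileNetV2
# (downloads weights on first use).
calib = synthesize_calibration_batch(mobilenet_v2(weights="DEFAULT"))
```

The resulting synthetic batch can then be passed through the network to collect activation ranges for quantization, without touching any real, data-sensitive samples.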
50
Enriched HARQ Feedback for Link Adaptation in 6G: Optimizing Uplink Overhead for Enhanced Downlink Spectral Efficiency
Published 2025-01-01“…First, our learning-driven adaptive quantization (LAQ) employs a-priori statistics to refine delta MCS quantization within fixed-size UE feedback. …”
Get full text
Article -
51
FL-QNNs: Memory Efficient and Privacy Preserving Framework for Peripheral Blood Cell Classification
Published 2025-01-01“…This study proposes a resource-efficient, privacy-preserving, memory-optimized framework that combines two approaches, federated learning and quantized neural networks (FL-QNNs), for peripheral blood cell (PBC) image classification. …”
Get full text
Article -
52
Optimizing Deep Learning Models for Resource‐Constrained Environments With Cluster‐Quantized Knowledge Distillation
Published 2025-05-01“…To address these issues, we propose Cluster‐Quantized Knowledge Distillation (CQKD), a novel framework that integrates structured pruning with knowledge distillation, incorporating cluster‐based weight quantization directly into the training loop. …”
Get full text
Article -
53
Study of algorithmic approaches to digital signal filtering and the influence of input quantization on output accuracy
Published 2025-01-01“…The research supports the broader integration of AI-driven technologies in modern automation systems, paving the way for more adaptive, efficient, and fault-tolerant control mechanisms in complex environments.…”
Get full text
Article -
54
Fully Quantized Matrix Arithmetic-Only BERT Model and Its FPGA-Based Accelerator
Published 2025-01-01“…In this paper, we propose a fully quantized matrix arithmetic-only BERT (FQ MA-BERT) model to enable efficient natural language processing. …”
Get full text
Article -
55
Research of channel quantization and feedback strategies based on multiuser diversity MIMO-OFDM systems
Published 2009-01-01“…First, a quantization method was proposed in which the quantized value indicates the modulation level instead of the full channel quality information (CQI); the achievable average spectral efficiency shows no loss compared with the perfect-feedback case. Second, an integrated design combining opportunistic, best, and hybrid feedback schemes was considered, and closed-form expressions for the average spectral efficiency were derived for the various cases. Finally, the optimal feedback parameters were determined from two aspects: feedback channel capacity and relative capacity loss. Extensive simulations were presented to evaluate the proposed strategies, and the results agree well with the numerical analysis. The proposed partial feedback schemes can greatly reduce the feedback load at the same system capacity, provided the feedback parameters are chosen properly. Among them, hybrid feedback combined with quantization performs best and offers guidance for designing the channel feedback of practical systems.…”
Get full text
Article -
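Entry 55 quantizes feedback by reporting the supportable modulation level rather than the full CQI value. As a purely illustrative sketch (the SNR thresholds below are hypothetical, and the opportunistic/best/hybrid feedback scheduling is not shown), the mapping might look like this:

```python
# Hypothetical SNR thresholds (dB) at which QPSK, 16-QAM, and 64-QAM become usable;
# index 0 means no transmission on that subcarrier group.
MCS_THRESHOLDS_DB = [-float("inf"), 5.0, 12.0, 18.0]

def quantize_cqi(snr_db: float) -> int:
    """Return a 2-bit feedback index: the highest modulation level whose
    threshold the measured SNR exceeds, instead of a full-precision CQI."""
    level = 0
    for i, threshold in enumerate(MCS_THRESHOLDS_DB):
        if snr_db >= threshold:
            level = i
    return level

# Each user feeds back 2 bits per subcarrier group rather than a full CQI value.
print([quantize_cqi(s) for s in (2.0, 7.5, 14.0, 25.0)])  # -> [0, 1, 2, 3]
```

With four levels, each report costs 2 bits instead of a full-precision CQI value, which is where the feedback-load reduction mentioned in the abstract comes from.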
56
Smoothed per-tensor weight quantization: a robust solution for neural network deployment
Published 2025-07-01“…This paper introduces a novel method to improve quantization outcomes for per-tensor weight quantization, focusing on enhancing computational efficiency and compatibility with resource-constrained hardware. …”
Get full text
Article -
57
Reducing Memory and Computational Cost for Deep Neural Network Training with Quantized Parameter Updates
Published 2025-08-01“…For embedded devices, both memory and computational efficiency are essential due to their constrained resources. …”
Get full text
Article -
58
Generation of Phase-Only Fourier Hologram Based on Double Phase Method and Quantization Error Analysis
Published 2020-01-01“…The double phase method is an efficient way to generate phase-only holograms with high reconstruction quality, since no random phase needs to be added. …”
Get full text
Article -
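Entry 58 refers to the double phase method. In its common textbook form, a normalized complex field A·exp(iφ) with A in [0, 1] equals the average of two phase-only terms, exp(i(φ+θ)) and exp(i(φ-θ)) with θ = arccos(A), and the two phase maps are interleaved (for example in a checkerboard) to form a single phase-only hologram. The sketch below shows only that generic construction; the paper's quantization error analysis is not reproduced, and the checkerboard interleaving is an assumption about the encoding variant.

```python
import numpy as np

def double_phase_hologram(field):
    """Encode a complex field as a single phase-only hologram using the double
    phase decomposition A*exp(i*phi) = 0.5*(exp(i*(phi+theta)) + exp(i*(phi-theta)))
    with theta = arccos(A), A normalized to [0, 1]. The two phase maps are
    interleaved in a checkerboard pattern."""
    amp = np.abs(field)
    amp = amp / amp.max()                 # normalize so arccos is defined
    phi = np.angle(field)
    theta = np.arccos(amp)
    phase_a, phase_b = phi + theta, phi - theta
    rows, cols = np.indices(field.shape)
    checker = (rows + cols) % 2 == 0
    return np.where(checker, phase_a, phase_b)

# Example: encode a random complex field; the result is a map of phases in radians.
u = np.random.rand(64, 64) * np.exp(1j * 2 * np.pi * np.random.rand(64, 64))
hologram = double_phase_hologram(u)
```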
59
Enhanced Vector Quantization for Embedded Machine Learning: A Post-Training Approach With Incremental Clustering
Published 2025-01-01“…This study introduces a novel method to optimize Post-Training Quantization (PTQ), a widely used technique for reducing model size, by integrating Vector Quantization (VQ) with incremental clustering. …”
Get full text
Article -
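Entry 59 combines post-training vector quantization with incremental clustering. The sketch below is a generic illustration of that combination, assuming weights are split into fixed-length sub-vectors and a codebook is learned with scikit-learn's MiniBatchKMeans via partial_fit; it is not the study's method, and the sub-vector length, codebook size, and chunking are arbitrary.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def vector_quantize_weights(weights, dim=4, codebook_size=256, chunks=8):
    """Split a weight matrix into length-`dim` sub-vectors, learn a codebook
    incrementally with MiniBatchKMeans.partial_fit, and return the codebook
    plus per-sub-vector indices (the compressed representation)."""
    flat = weights.reshape(-1, dim)            # assumes weights.size is a multiple of dim
    km = MiniBatchKMeans(n_clusters=codebook_size, random_state=0)
    for chunk in np.array_split(flat, chunks): # incremental (streaming) updates
        km.partial_fit(chunk)
    indices = km.predict(flat)
    return km.cluster_centers_, indices

def dequantize(codebook, indices, shape):
    """Reconstruct an approximate weight tensor from the codebook and indices."""
    return codebook[indices].reshape(shape)

w = np.random.randn(512, 512).astype(np.float32)
codebook, idx = vector_quantize_weights(w)
w_hat = dequantize(codebook, idx, w.shape)
print("mean abs error:", np.abs(w - w_hat).mean())
```

Storing only the codebook and the per-sub-vector indices, rather than the full-precision weights, is what yields the memory savings.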
60
Optimising TinyML with quantization and distillation of transformer and mamba models for indoor localisation on edge devices
Published 2025-03-01“…Abstract This paper proposes small and efficient machine learning models (TinyML) for resource-constrained edge devices, specifically for on-device indoor localisation. …”
Get full text
Article