61
Tagging Efficiency of 99mTc-SC Radiolabeled Alternative Gastric Emptying Meals: A Quantitative Study
Published 2022-09-01
Article
62
Quantitative Comparison of Age‐Related Development of Oral Functions During Growing Age
Published 2024-12-01
Article
63
Quantized Feedback Control of Active Suspension Systems Based on Event Trigger
Published 2021-01-01
“…In addition, the trigger mechanism can improve the working efficiency of actuators effectively.…”
Article
64
Optimizing binary neural network quantization for fixed pattern noise robustness
Published 2025-07-01
“…This work presents a comprehensive analysis of how extreme data quantization and fixed pattern noise (FPN) from CMOS imagers affect the performance of deep neural networks for image recognition tasks. …”
Article
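As a toy illustration (not from the article above) of the two effects it studies, the sketch below models fixed pattern noise as a per-pixel offset that is identical in every frame and then applies extreme one-bit quantization to the image. NumPy is assumed, and all parameter values are made-up assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Fixed pattern noise: a per-pixel offset that stays the same for every frame,
# as produced by mismatches between the pixels of a CMOS imager.
fpn = rng.normal(0.0, 0.05, size=(32, 32))

def capture(frame):
    """Simulate a sensor read-out corrupted by fixed pattern noise."""
    return np.clip(frame + fpn, 0.0, 1.0)

def binarize(frame, threshold=0.5):
    """Extreme (1-bit) quantization of the input image."""
    return (frame >= threshold).astype(np.float32)

frames = rng.uniform(0.0, 1.0, size=(4, 32, 32))        # a few synthetic frames
flips = [(binarize(capture(f)) != binarize(f)).mean() for f in frames]
print(flips)   # fraction of pixels whose 1-bit value is flipped by the FPN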
65
Efficient gingival health screening using biofluorescence of anterior dental biofilms
Published 2025-06-01
Article
67
Lost-minimum post-training parameter quantization method for convolutional neural network
Published 2022-04-01
“…To address the lack of datasets for model quantization in data-sensitive scenarios, a model quantization method that requires no dataset was proposed. First, simulated input data were obtained by error minimization, based on the parameters of the batch normalization layers and the distribution characteristics of image data. Then, by studying the characteristics of data rounding, a dynamic factor rounding method based on loss minimization was proposed. Quantization experiments on classification models such as GhostNet and object detection models such as M2Det verified the effectiveness of the proposed method for image classification and object detection. The experimental results show that the method reduces model size by about 75%, effectively lowers power consumption, and improves computing efficiency while largely preserving the accuracy of the original model.…”
Article
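The data-free calibration idea sketched in the abstract above can be illustrated with a short, hypothetical example (not the paper's code): assuming PyTorch, random inputs are optimized so that the activation statistics entering each BatchNorm2d layer match its stored running mean and variance, producing synthetic calibration data for post-training quantization. The function name and hyperparameters below are illustrative assumptions.

import torch
import torch.nn as nn

def synthesize_calibration_batch(model, shape=(8, 3, 224, 224), steps=200, lr=0.1):
    """Optimize noise images so BN input statistics match the stored running stats."""
    model.eval()
    x = torch.randn(shape, requires_grad=True)   # start from pure noise
    opt = torch.optim.Adam([x], lr=lr)

    bn_layers, captured, handles = [], [], []

    def make_hook(idx):
        def hook(module, inp, out):
            a = inp[0]                            # activation entering the BN layer
            captured.append((idx, a.mean(dim=(0, 2, 3)), a.var(dim=(0, 2, 3))))
        return hook

    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            handles.append(m.register_forward_hook(make_hook(len(bn_layers))))
            bn_layers.append(m)

    for _ in range(steps):
        captured.clear()
        opt.zero_grad()
        model(x)
        # error between observed batch statistics and the BN running statistics
        loss = x.new_zeros(())
        for idx, mean, var in captured:
            bn = bn_layers[idx]
            loss = loss + (mean - bn.running_mean).pow(2).mean() \
                        + (var - bn.running_var).pow(2).mean()
        loss.backward()
        opt.step()

    for h in handles:
        h.remove()
    return x.detach()                             # synthetic calibration batch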
68
Decentralized non-convex online optimization with adaptive momentum estimation and quantized communication
Published 2025-03-01
“…To solve the problem in a communication-efficient manner, we propose a novel quantized decentralized adaptive momentum gradient descent algorithm based on adaptive momentum estimation methods, in which quantized information is exchanged between agents. …”
Article
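A minimal sketch of the quantized-communication idea mentioned above, not the authors' algorithm: each agent compresses the vector it exchanges with its neighbors using an unbiased stochastic uniform quantizer. NumPy is assumed and the function name is illustrative.

import numpy as np

def stochastic_quantize(v, num_levels=16):
    """Unbiased stochastic uniform quantizer for a vector exchanged between agents."""
    lo, hi = v.min(), v.max()
    if hi == lo:                       # constant vector: nothing to quantize
        return v.copy()
    scale = (hi - lo) / (num_levels - 1)
    normalized = (v - lo) / scale      # position on the quantization grid
    lower = np.floor(normalized)
    prob_up = normalized - lower       # round up with this probability -> unbiased
    levels = lower + (np.random.rand(*v.shape) < prob_up)
    return lo + levels * scale         # only the integer levels, lo and hi are sent

# Example: an agent quantizes its local gradient before sharing it with neighbors.
g = np.random.randn(1000)
g_q = stochastic_quantize(g, num_levels=8)
print(float(np.abs(g - g_q).mean()))   # quantization error shrinks as levels grow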
69
Convolution Smooth: A Post-Training Quantization Method for Convolutional Neural Networks
Published 2025-01-01
“…Convolutional neural network (CNN) quantization is an efficient model compression technique primarily used for accelerating inference and optimizing resources. …”
Article
70
Quantitative Study of Spodumene by Time-of-Flight Secondary Ion Mass Spectrometry (tof-SIMS)
Published 2025-03-01
“…In this paper, the quantitative feasibility of time-of-flight secondary ion mass spectrometry (tof-SIMS) for major and minor elements in spodumene was evaluated in terms of the calibration method, using a matrix-matched or non-matrix-matched standard and Al or Si as the internal standard element. …”
Article
71
AFQSeg: An Adaptive Feature Quantization Network for Instance-Level Surface Crack Segmentation
Published 2025-05-01
“…To address these issues, this paper proposes a crack detection model based on adaptive feature quantization, which primarily consists of a maximum soft pooling module, an adaptive crack feature quantization module, and a trainable crack post-processing module. …”
Article
72
Hierarchical Mixed-Precision Post-Training Quantization for SAR Ship Detection Networks
Published 2024-10-01
“…However, limited satellite platform resources present a significant challenge. Post-training quantization (PTQ) provides an efficient way to quantize pre-trained neural networks, effectively reducing memory and computational requirements without retraining. …”
Article
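As a rough illustration of what post-training quantization does in general, independent of the hierarchical mixed-precision scheme of the paper above: a trained weight tensor is mapped to 8-bit integers with a per-tensor scale and zero point, then dequantized at inference time. NumPy sketch with illustrative names.

import numpy as np

def quantize_per_tensor(w, num_bits=8):
    """Uniform affine post-training quantization of a trained weight tensor."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0      # avoid division by zero
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Example: one layer's weights stored as uint8 cost 4x less memory than float32,
# at the price of a small reconstruction error.
w = np.random.randn(256, 128).astype(np.float32)
q, scale, zp = quantize_per_tensor(w)
print(float(np.abs(w - dequantize(q, scale, zp)).max()))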
73
NeuBridge: bridging quantized activations and spiking neurons for ANN-SNN conversion
Published 2025-01-01
“…Spiking neural networks (SNNs) offer a promising avenue for energy-efficient computations on neuromorphic hardware, leveraging the unique advantages of spike-based signaling. …”
Article
74
Randomized Quantization for Privacy in Resource Constrained Machine Learning at-the-Edge and Federated Learning
Published 2025-01-01
“…Through rigorous theoretical analysis and extensive experiments on benchmark datasets, we demonstrate that these methods significantly enhance the utility-privacy trade-off and computational efficiency in both ML-at-the-edge and FL systems. RQP-SGD is evaluated on MNIST and the Breast Cancer Diagnostic dataset, showing an average 10.62% utility improvement over the deterministic quantization-based projected DP-SGD while maintaining (1.0, 0)-DP. …”
Article
75
Enriched HARQ Feedback for Link Adaptation in 6G: Optimizing Uplink Overhead for Enhanced Downlink Spectral Efficiency
Published 2025-01-01
“…First, our learning-driven adaptive quantization (LAQ) employs a priori statistics to refine delta MCS quantization within fixed-size UE feedback. …”
Article
79
FL-QNNs: Memory Efficient and Privacy Preserving Framework for Peripheral Blood Cell Classification
Published 2025-01-01
“…This study proposes a resource-efficient, privacy-preserving, memory-optimized framework that incorporates two approaches, federated learning and quantized neural networks (FL-QNNs), for peripheral blood cell (PBC) image classification. …”
Article
80
Dynamic Integration of Shading and Ventilation: Novel Quantitative Insights into Building Performance Optimization
Published 2025-03-01
“…Buildings consume nearly 40% of global energy, necessitating innovative strategies to balance energy efficiency and occupant comfort. While shading and ventilation are critical to sustainable design, they are often studied independently, leaving gaps in understanding their combined potential. …”
Article