FPGA-QNN: Quantized Neural Network Hardware Acceleration on FPGAs
Recently, convolutional neural networks (CNNs) have attracted considerable interest due to their ability to achieve high accuracy in various artificial intelligence tasks. With the development of complex CNN models, a significant drawback is their high computational burden and memory requirements...
Main Authors: Mustafa Tasci, Ayhan Istanbullu, Vedat Tumen, Selahattin Kosunalp
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/2/688
Similar Items
- A Hardware Accelerator for the Inference of a Convolutional Neural Network
  by: Edwin González, et al.
  Published: (2019-11-01)
- A Pipelined Hardware Design of FNTT and INTT of CRYSTALS-Kyber PQC Algorithm
  by: Muhammad Rashid, et al.
  Published: (2024-12-01)
- Low-latency hierarchical routing of reconfigurable neuromorphic systems
  by: Samalika Perera, et al.
  Published: (2025-02-01)
- Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
  by: Ali Mehrabi, et al.
  Published: (2024-01-01)
- Sparse Convolution FPGA Accelerator Based on Multi-Bank Hash Selection
  by: Jia Xu, et al.
  Published: (2024-12-01)