Quantization timeline
Category: Other
Updated: 29 Mar 2020
The development history of quantization.
Created by 龚成
Other timelines by 龚成:
Pruning (30 Aug 2019)
Knowledge transfer / distillation (30 Aug 2019)
Decomposition (30 Aug 2019)
New timeline (22 Aug 2019)
Pruning Entropy (12 Nov 2019)
New timeline (22 Jan 2020)
Events
Song Han: Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding, 2015
M. Courbariaux: Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1 (BNNs)
Ternary Weight Networks (TWNs)
Trained Ternary Quantization (TTQ)
Training and Inference with Integers in Deep Neural Networks
HAQ: Hardware-Aware Automated Quantization With Mixed Precision, CVPR, 2019
Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM, AAAI, 2018
DoReFa-Net: Training low-bitwidth CNNs with low-bitwidth gradients
CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization (CVPR 2018)
Ternary Neural Networks for Resource-Efficient AI Applications (TNNs)
Philipp Gysel: Hardware-oriented Approximation of Convolutional Neural Networks
Canran Jin: Sparse Ternary Connect: Convolutional Neural Networks Using Ternarized Weights with Enhanced Sparsity (STC)
Cheng Gong: μL2Q: An Ultra-Low Loss Quantization Method for DNN
Bichen Wu: Mixed precision quantization of convnets via differentiable neural architecture search, 2018
Lei Deng: GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework, NN, 2018
Rastegari: XNOR-Net: ImageNet classification using binary convolutional neural networks, ECCV, 2016
Deep Learning with Low Precision by Half-wave Gaussian Quantization (HWGQ)
Deep Neural Network Compression with Single and Multiple Level Quantization
Two-Step Quantization for Low-bit Neural Networks, CVPR, 2018
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights, INQ, 2017
Yao Chen: T-DLA: An Open-source Deep Learning Accelerator for Ternarized DNN Models on Embedded FPGA, 2019
Daisuke Miyashita: Convolutional Neural Networks using Logarithmic Data Representation
Forward and Backward Information Retention for Accurate Binary Neural Networks, CVPR 2020
Retrain-Less Weight Quantization for Multiplier-Less Convolutional Neural Networks
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting, arXiv
BinaryConnect: Training deep neural networks with binary weights during propagations, NIPS
Neural networks with few multiplications
QAT: Quantization and training of neural networks for efficient integer-arithmetic-only inference, CVPR
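Most of the entries above quantize full-precision weights to a small set of discrete levels (binary, ternary, or k-bit integers). As a rough illustration only, and not taken from any specific paper listed here, the Python sketch below shows deterministic binarization via the sign function (the BinaryConnect/BNN-style scheme) and a simple symmetric uniform k-bit quantizer; the function names and exact scaling are assumptions of this sketch.

import numpy as np

def binarize(w):
    # Deterministic binarization to {-1, +1} via the sign of each weight,
    # as used by BinaryConnect / BNN-style methods.
    return np.where(w >= 0, 1.0, -1.0)

def uniform_quantize(w, bits=8):
    # Symmetric uniform quantization: scale weights onto the signed integer
    # grid [-(2^(bits-1) - 1), +(2^(bits-1) - 1)], round, then de-quantize
    # so the result can be compared against the full-precision weights.
    scale = np.max(np.abs(w)) + 1e-12   # avoid division by zero
    levels = 2 ** (bits - 1) - 1        # e.g. 127 for 8 bits
    q = np.round(w / scale * levels)    # integers on the grid
    return q / levels * scale           # back to floating point

if __name__ == "__main__":
    w = np.random.randn(5).astype(np.float32)
    print("full precision:", w)
    print("binary        :", binarize(w))
    print("8-bit uniform :", uniform_quantize(w, bits=8))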