Eclipse Projects / aidge / aidge_quantization · Merge requests · !2
Draft: Dev ptq
Overview (6) · Commits (11) · Pipelines (0) · Changes (2) · All threads resolved!
Closed · Cyril Moineau requested to merge DevPTQ into master · 1 year ago
TODO

- Normalize layers
  - Get the maximum weight value of each layer (Fc/Conv) -> alpha
  - Divide the weights by alpha
  - Divide the bias by alpha
  - Add PerOutputChannel normalization
    - TODO: add steps to do this ...
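The per-layer normalization above can be sketched as follows. This is a minimal illustration only; `normalize_layer` is a hypothetical helper, not part of the aidge API.

```python
import numpy as np

def normalize_layer(weights, bias):
    """Normalize one Fc/Conv layer by its maximum absolute weight (alpha).

    Hypothetical sketch of the steps: find alpha, then divide both the
    weights and the bias by it so the weights fall in [-1, 1].
    """
    alpha = np.abs(weights).max()        # alpha = max |w| over the layer
    return weights / alpha, bias / alpha, alpha

w, b, alpha = normalize_layer(np.array([1.0, -4.0, 2.0]), np.array([2.0]))
```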
- Normalize activations
  - Normalize stimuli between [0, 1] or [-1, 1]
  - Forward on the validation dataset
  - Get the maximum activation value of each layer -> beta
    - Develop a hook system
    - Develop a "get max activation" hook
  - Apply the scaling factor beta to the activation
  - Apply the scaling factor beta to the bias
  - Rescale the activation by the parent's bias scaling
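The "get max activation" hook could look like the sketch below: a callable attached to a layer's output that tracks the running maximum over calibration forwards. `OutputRangeHook` here is a stand-in, not the actual aidge hook class.

```python
import numpy as np

class OutputRangeHook:
    """Hypothetical sketch of a max-activation hook: records the maximum
    absolute output value (beta) seen across calibration forward passes."""

    def __init__(self):
        self.beta = 0.0

    def __call__(self, output):
        # Track the running maximum over every forwarded batch.
        self.beta = max(self.beta, float(np.abs(output).max()))
        return output

# Simulate forwards on a tiny "validation dataset" of two batches.
hook = OutputRangeHook()
for batch in (np.array([0.5, -3.0]), np.array([1.5, 2.0])):
    hook(batch)
```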
- Quantization
  - Inputs: multiply by 2^(n-1)-1 (signed) or 2^n-1 (unsigned)
  - Weights: multiply by 2^(n-1)-1, round, and store as a signed integer of nbbits size
  - Biases: multiply by 2^(n-1)-1 (weight scaling) and by 2^(n-1)-1 (signed) or 2^n-1 (unsigned) (input scaling), then store as a signed integer of nbbits size
  - Activation scaling:
    - Input scaling: 2^(n-1)-1 multiplied by 2^(n-1)-1 (signed) or 2^n-1 (unsigned)
    - Output scaling: 2^(n-1)-1 (signed) or 2^n-1 (unsigned)
    - Activation scaling: divide by (input scaling divided by output scaling)
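The weight step can be sketched as below, assuming the weights were already normalized to [-1, 1] by the earlier alpha division. `quantize_weights` is a hypothetical name; the signed scale 2^(n-1)-1 is 127 for 8 bits.

```python
import numpy as np

def quantize_weights(w_norm, nb_bits):
    """Quantize weights already normalized to [-1, 1] into signed integers.

    Hypothetical helper matching the "multiply by 2^(n-1)-1, round, store
    as signed integer" steps of the checklist.
    """
    scale = 2 ** (nb_bits - 1) - 1
    return np.round(w_norm * scale).astype(np.int32)

q = quantize_weights(np.array([1.0, -1.0, 0.5]), 8)
```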
- Activation clipping
  - Generate a histogram of the outputs of each layer (using the validation dataset)
  - MSE
    - TODO: add steps to do this
  - KL-Divergence
    - TODO: add steps to do this
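The MSE criterion (whose exact steps are still a TODO above) typically means searching for the clipping threshold that minimizes the squared error between the original activations and their clipped, quantized reconstruction. A sketch under that assumption, with a hypothetical `mse_clipping_threshold` helper:

```python
import numpy as np

def mse_clipping_threshold(values, nb_bits, n_steps=100):
    """Pick the clipping threshold t minimizing the MSE between `values`
    and their clipped + quantized + dequantized version.

    Hypothetical sketch assuming signed quantization to 2^(nb_bits-1)-1
    levels; a KL-Divergence variant would score each t differently.
    """
    scale = 2 ** (nb_bits - 1) - 1
    vmax = np.abs(values).max()
    best_t, best_mse = vmax, np.inf
    for t in np.linspace(vmax / n_steps, vmax, n_steps):
        clipped = np.clip(values, -t, t)
        # Quantize to integers at threshold t, then dequantize back.
        recon = np.round(clipped / t * scale) / scale * t
        mse = float(np.mean((values - recon) ** 2))
        if mse < best_mse:
            best_t, best_mse = t, mse
    return best_t

vals = np.random.RandomState(0).randn(1000)
t = mse_clipping_threshold(vals, 4)
```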
- Weights clipping
  - TODO: add steps to do this
- Better scaling operation
  - Fixed-point scaling
  - Single-shift scaling
  - Double-shift scaling
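The single-shift option replaces a floating-point rescaling by one integer bit-shift (s ≈ 2^-shift), which removes the multiplier from the datapath; double shift would combine two such shifts for a closer approximation. A sketch with hypothetical helper names:

```python
import math

def single_shift(scaling):
    """Approximate a scaling factor s in (0, 1] by a right bit-shift,
    i.e. s ~= 2**-shift. Hypothetical sketch of "single shift" scaling."""
    return max(0, round(-math.log2(scaling)))

def apply_scaling(acc, shift):
    # Integer rescaling of an accumulator: one shift, no multiplier.
    return acc >> shift

shift = single_shift(0.25)   # 0.25 == 2**-2
scaled = apply_scaling(100, shift)
```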
- Bind the method in Python
- LeNet integration test
- Refactor the OutputRange hook to use the Tensor getter/setter
- Add a docstring to the OutputRange hook
- Refactor the hook system to not use the registrar?
Edited 1 year ago by Cyril Moineau
Merge request reports
Viewing commit 2202cd0f (2 files, +2 −2)

Reflect hook naming changes. · 2202cd0f · Cyril Moineau authored 1 year ago
python_binding/pybind_QuantPTQ.cpp (+1 −1)

```diff
@@ -15,7 +15,7 @@
 #include <string>
 #include "aidge/QuantPTQ.hpp"
-#include "aidge/hook/hook.hpp"
+#include "aidge/hook/Hook.hpp"

 namespace py = pybind11;
```