Resolve "Optimizer to update gradients"

Maxence Naud requested to merge create_optimizer into learning

Closes #77 (closed)

The Optimizer and LRScheduler classes are moved to the aidge_learning module. This merge request adds everything needed for the Optimizer.

Expected user interface

# manage data
my_data = DataBase("path/to/dataset", transformations=[resize, normalize])
my_provider = DataProvider(my_data, batchsize=8)

# manage model
model = load_onnx("path/to/model")
model.compile("cpu", aidge.dtype.float32)
model.instanciate_grad() # create the associated gradient Tensors
model.compile_grad() # set Tensor / backward Operator backend and data type

# manage optimization
lr = ConstantLR(10e-3) # set up learning rate
opt = SGD(0.9, 0.1) # set up optimizer: momentum + dampening
# manage optimization parameters
opt.set_parameters(model.parameters())
opt.set_learning_rate(lr)

sch = scheduler(model)
for x1, x2, label in my_provider:
    y = sch.forward([x1, x2])  # x1 on G.node1.input(0) and x2 on G.node23.input(4)
    loss = myLoss(y, label)
    opt.zero_grad()
    sch.backward(loss)
    opt.update()
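
For context, ConstantLR used above would be one of the LRScheduler classes mentioned in the summary. A minimal Python sketch of what such a scheduler could look like; apart from ConstantLR itself, the class and method names are illustrative, not the final aidge_learning API:

class LRScheduler:
    """Base class for learning-rate schedules: return the rate to use at a given step."""
    def learning_rate(self, step: int) -> float:
        raise NotImplementedError

class ConstantLR(LRScheduler):
    """Keep the same learning rate for every optimization step."""
    def __init__(self, lr: float):
        self.lr = lr

    def learning_rate(self, step: int) -> float:
        return self.lr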

Solution

The choice was made to create an abstract Optimizer class with derived classes such as Adam, SGD, etc.

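A minimal Python sketch of how that hierarchy could behave, assuming the interface used in the example above (set_parameters, set_learning_rate, zero_grad, update); attribute names such as data and grad and the exact update rule are assumptions for illustration, not the actual C++ implementation:

class Optimizer:
    """Abstract base: holds the parameters and the learning-rate schedule."""
    def __init__(self):
        self.parameters = []
        self.lr_scheduler = None
        self.step = 0

    def set_parameters(self, parameters):
        self.parameters = parameters

    def set_learning_rate(self, lr_scheduler):
        self.lr_scheduler = lr_scheduler

    def zero_grad(self):
        for p in self.parameters:
            p.grad = 0.0  # reset accumulated gradients before the next backward pass

    def update(self):
        raise NotImplementedError  # each derived optimizer defines its own rule


class SGD(Optimizer):
    """SGD with momentum and dampening, following the usual update rule."""
    def __init__(self, momentum=0.0, dampening=0.0):
        super().__init__()
        self.momentum = momentum
        self.dampening = dampening
        self.velocity = {}

    def update(self):
        lr = self.lr_scheduler.learning_rate(self.step)
        for i, p in enumerate(self.parameters):
            # v <- momentum * v + (1 - dampening) * grad
            v = self.momentum * self.velocity.get(i, 0.0) + (1.0 - self.dampening) * p.grad
            self.velocity[i] = v
            p.data = p.data - lr * v  # with Aidge Tensors, the new arithmetic operators would cover this step
        self.step += 1
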
Changes

  • Add operator+,-,*,/ to Tensor class #79 (closed)
  • Add mBackend variable to OperatorImpl #20 (closed)
  • Change the type of the mBackend variable in TensorImpl from 'const char*' to 'std::string' #2 (closed)
  • Initialization of the gradient Tensor associated with a Tensor instance (illustrated in the sketch after this list)
  • Minor changes
    • remove <iostream> include where possible
    • add default parameters to GraphView::compile() member function
    • remove the Tensor.hpp dependency from Operators where possible
    • add Tensor.hpp in binding of operators
  • Update the producers() and parameters() functions to return Tensors instead of Nodes
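
To make the Tensor-arithmetic and gradient-Tensor items above concrete (the sketch referenced in the list), a hypothetical update step; the grad() accessor, the copy_from() helper, and Tensor arithmetic with a Python scalar are assumptions for illustration, not the confirmed API:

# Hypothetical: after this MR, parameters() yields Tensors, each with an
# associated gradient Tensor created by instanciate_grad(), and the new
# Tensor operators let the update be written directly on Tensors.
lr = 1e-3
for param in model.parameters():
    grad = param.grad()              # associated gradient Tensor (assumed accessor)
    updated = param - grad * lr      # element-wise, via the new operator- and operator*
    param.copy_from(updated)         # write the result back (name chosen for illustration)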
