Branches and tags:
  • ForwardDtype
  • add_clipping_node
  • add_metaops_bwd
  • chore_clean-node-and-graphview
  • chore_move_broadcasting_to_core
  • copytape
  • custom_pybind
  • dataProvider_setBackend
  • dev (protected)
  • experiment_with_pipeline
  • experimental
  • feat/formatting
  • feat_aidge197_namespaces
  • feat_benchmark
  • feat_enhance_operator_resize_support
  • feat_export_refactor
  • feat_hw_model
  • fit_fmt
  • fix (protected)
  • fixGraphRegexUnique
  • v0.5.1
  • v0.5.0
  • v0.4.0
  • v0.3.1
  • v0.3.0
  • v0.2.2
  • v0.2.1
  • v0.2.0
  • v0.1.1
  • v0.1.0
Commit history (24 Mar back to 15 Nov, most recent first):
  • Minor Mermaid fix for older versions
  • Added unit test
  • Merge branch 'Filler' into 'dev'
  • Merge branch 'learning' of gitlab.eclipse.org:eclipse/aidge/aidge_core into learning
  • Fix Producer constructor with pybind
  • Fix python tests
  • fix ReduceMean.hpp initializer_list as vector
  • Fix python binding and add Tensor.hpp in includes for opertors
  • Fix OperatorTensor.cpp
  • Fix includes
  • Move Optimizer and LR in their own module
  • [Add][Test] Unit-test for LRScheduler
  • [Add] ConstantLR and StepLR
  • [Add] learning rate scheduler class LRScheduler
  • [Fix] 'Cast.cpp' includes
  • Minor optimizations and add default values to 'GraphView::compile()' member function
  • Remove iostream include from some files
  • Upd GraphViewHelper functions to return Tensors instead of Nodes
  • [Add] operator+,-,*,/ to Tensor class and [Add] gradient initialization for Tensor
  • Change 'TensorImpl' mBackend member variable type and move member functions to source file
  • [Add] Backend member variable to Operator implementations
  • [Add] 'Operator::backend()' member function and move Tensor dependence from header to source file when possible in operator
  • Move Optimizer and LR in their own module
  • [Add][Test] Unit-test for LRScheduler
  • [Add] ConstantLR and StepLR
  • [Add] learning rate scheduler class LRScheduler
  • Update argument name to avoid ambiguous name.
  • Add a Random Generator handler class and use it for fillers.
  • Fix typo Tensor.hpp
  • Move filler definition in cpp files.
  • Add heFiller.
  • Add Normal Filler.
  • Add basic normal filler.
  • Add basic Uniform filler.
  • Fixed issue with fuseMulAdd
  • Merge remote-tracking branch 'origin/dev' into learning
  • chore : change type of nodes generated
  • Merge branch 'create_optimizer' into 'learning'
  • Fix Producer constructor with pybind
  • Fix python tests