
Implement backward function of Add operator

Merged Jerome Hue requested to merge jeromeh/aidge_backend_cpu:add-backward into dev
2 unresolved threads

Context

Add backward kernel for Add.

Partially fixes #30 (closed).

Merge request reports


Merged by Jerome Hue 1 month ago (Feb 26, 2025 7:50am UTC)


Pipeline #66393 passed

Pipeline passed for 5ea40966 on dev

Test coverage 84.05% (0.15%) from 2 jobs

Activity

179 std::vector<std::size_t> idxInput1(broadcastedDims1.size());
180
181 for (std::size_t dimension = 0; dimension < broadcastedDims0.size(); ++dimension) {
182 idxInput0[dimension] = (broadcastedDims0[dimension] == 1) ? 0 : idxOutputGrad[dimension];
183 }
184
185 for (std::size_t dimension = 0; dimension < broadcastedDims1.size(); ++dimension) {
186 idxInput1[dimension] = (broadcastedDims1[dimension] == 1) ? 0 : idxOutputGrad[dimension];
187 }
188
189 auto idx0 = getFlattenedIndex(broadcastedDims0, idxInput0);
190 auto idx1 = getFlattenedIndex(broadcastedDims1, idxInput1);
191
192 // For addition: gradient of both inputs is just the output gradient
193 // (unlike multiplication where we need to multiply by the other input,
194 // or subtraction where we need to negate one of them)
  • Comment on lines +192 to +194

    Since the inputs' gradient is just an accumulation of the output's gradient over broadcasted dimensions, this kernel might be vastly optimized :slight_smile:
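    The reviewer's observation holds because for c = a + b the partial derivative with respect to each input is 1, so each input gradient is simply the output gradient accumulated over any broadcast dimensions. A self-contained sketch of this accumulation (hypothetical function names, not the actual Aidge kernel API; shapes are assumed already padded to a common rank, as in the diff above):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Row-major flattening of a multi-dimensional index.
    std::size_t flattenedIndex(const std::vector<std::size_t>& dims,
                               const std::vector<std::size_t>& idx) {
        std::size_t flat = 0;
        for (std::size_t d = 0; d < dims.size(); ++d) {
            flat = flat * dims[d] + idx[d];
        }
        return flat;
    }

    // Backward of element-wise Add with broadcasting: for c = a + b,
    // dL/da and dL/db are the output gradient summed over the
    // dimensions where the corresponding input was broadcast (size 1).
    void addBackward(const std::vector<float>& gradOutput,
                     const std::vector<std::size_t>& outDims,
                     const std::vector<std::size_t>& dims0,
                     const std::vector<std::size_t>& dims1,
                     std::vector<float>& gradInput0,
                     std::vector<float>& gradInput1) {
        const std::size_t rank = outDims.size();
        std::size_t total = 1;
        for (auto d : outDims) total *= d;

        std::vector<std::size_t> idxOut(rank), idx0(rank), idx1(rank);
        for (std::size_t flat = 0; flat < total; ++flat) {
            // Decode the flat output index into a multi-dimensional one.
            std::size_t rem = flat;
            for (std::size_t d = rank; d-- > 0;) {
                idxOut[d] = rem % outDims[d];
                rem /= outDims[d];
            }
            // Broadcast dimensions (size 1) collapse to index 0.
            for (std::size_t d = 0; d < rank; ++d) {
                idx0[d] = (dims0[d] == 1) ? 0 : idxOut[d];
                idx1[d] = (dims1[d] == 1) ? 0 : idxOut[d];
            }
            // d(a+b)/da = d(a+b)/db = 1: just accumulate the output gradient.
            gradInput0[flattenedIndex(dims0, idx0)] += gradOutput[flat];
            gradInput1[flattenedIndex(dims1, idx1)] += gradOutput[flat];
        }
    }
    ```

    Decoding the flat index on every iteration costs O(rank) per element; an optimized kernel along the lines the reviewer suggests could instead walk the multi-index incrementally, or take a fast path (a plain copy of the output gradient) when an input is not broadcast at all.
    
    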

  • Maxence Naud changed milestone to %aidge v0.6.0

  • Jerome Hue mentioned in merge request !145 (merged)

  • Jerome Hue mentioned in issue #45
