Project: Cyril Moineau / aidge_core

Commit 443fc8d6, authored 1 year ago by Olivier BICHLER
Parent: 3f16a3f7

    Corrected wrong behavior when cascading refCast and refFrom

Showing 2 changed files, with 30 additions and 18 deletions:
- include/aidge/data/Tensor.hpp (+3, −3)
- src/data/Tensor.cpp (+27, −15)
include/aidge/data/Tensor.hpp (+3, −3)

@@ -723,8 +723,8 @@ class Tensor : public Data,
      * Return a reference to a Tensor on desired data type and backend/device:
      * - itself, if already with the right characteristics;
      * - the provided Tensor, overwritten with the copy-casted data.
-     * If required, fallback is always allocated on current (destination)
-     * Tensor's device.
+     * If required, fallback is always allocated on desired (destination)
+     * device.
      * @param fallback A shared_ptr to Tensor ready to be overwritten if necessary.
      * The shared_ptr does not need to be initialized. No new memory allocation
      * will occur if fallback has already been allocated with the right

@@ -735,7 +735,7 @@
      * @return Reference to either itself or to fallback.
      */
    Tensor& refCastFrom(std::shared_ptr<Tensor>& fallback, const Aidge::DataType& dt, const std::string& backend, int device = 0) {
-       // First refFrom, to ensure that fallback, if required, is on current Tensor's device
+       // First refFrom, to ensure that fallback, if required, is also on desired device
        return refFrom(fallback, backend, device).refCast(fallback, dt);
    }
...
src/data/Tensor.cpp (+27, −15)

@@ -52,17 +52,23 @@ const Aidge::Tensor& Aidge::Tensor::refCast(std::shared_ptr<Tensor>& fallback, c
        return *this;
    }
    else {
-       if (!fallback) {
-           fallback = std::make_shared<Tensor>(dt);
-       }
-       else {
-           fallback->setDataType(dt, false); // don't keep previous data (no copy)
-       }
-
-       const auto device = getImpl()->device();
-       fallback->setBackend(device.first, device.second, false); // don't keep previous data (no copy)
-       fallback->resize(dims());
-       fallback->getImpl()->copyCast(getImpl()->rawPtr(), size(), dataType());
+       if (this == fallback.get()) {
+           // if refFrom() was called before, just change the type
+           fallback->setDataType(dt);
+       }
+       else {
+           if (!fallback) {
+               fallback = std::make_shared<Tensor>(dt);
+           }
+           else {
+               fallback->setDataType(dt, false); // don't keep previous data (no copy)
+           }
+
+           const auto device = getImpl()->device();
+           fallback->setBackend(device.first, device.second, false); // don't keep previous data (no copy)
+           fallback->resize(dims());
+           fallback->getImpl()->copyCast(getImpl()->rawPtr(), size(), dataType());
+       }
        return *fallback;
    }
}
@@ -79,16 +85,22 @@ const Aidge::Tensor& Aidge::Tensor::refFrom(std::shared_ptr<Tensor>& fallback, c
        return *this;
    }
    else {
-       if (!fallback) {
-           fallback = std::make_shared<Tensor>(dataType());
-       }
-       else {
-           fallback->setDataType(dataType(), false); // don't keep previous data (no copy)
-       }
-
-       fallback->setBackend(backend, device, false); // don't keep previous data (no copy)
-       fallback->resize(dims());
-       fallback->getImpl()->copyFrom(*getImpl(), size());
+       if (this == fallback.get()) {
+           // if refCast() was called before, just change the backend
+           fallback->setBackend(backend, device);
+       }
+       else {
+           if (!fallback) {
+               fallback = std::make_shared<Tensor>(dataType());
+           }
+           else {
+               fallback->setDataType(dataType(), false); // don't keep previous data (no copy)
+           }
+
+           fallback->setBackend(backend, device, false); // don't keep previous data (no copy)
+           fallback->resize(dims());
+           fallback->getImpl()->copyFrom(*getImpl(), size());
+       }
        return *fallback;
    }
}