aidge_core (Eclipse Projects / aidge)

Commit d01920a4, authored 1 year ago by Cyril Moineau

[IMP] Tensor size is always set to be >=1.

Parent: b4286319
Merge request: !67 (dev)
Pipeline #36828: failed, 1 year ago (stages: static_analysis, build, test, coverage)
Changes: 1, Pipelines: 1
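In other words, the element count is now always computed as the product of the dimensions with an initial value of 1 (the empty-product convention), so a Tensor with an empty dimension list (a scalar) reports a size of 1 instead of 0; the computeSize() hunk at the end of the diff carries that change.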
Showing 1 changed file: include/aidge/data/Tensor.hpp (+25 additions, −32 deletions)
@@ -15,7 +15,7 @@
 #include <cstring>
 #include <set>
 #include <memory>
-#include <numeric>
+#include <numeric>    // std::accumulate
 #include <string>
 #include <vector>
@@ -327,11 +327,11 @@ class Tensor : public Data,
     /**
      * @brief Change the dimensions of the Tensor object according to the given argument.
      * If the overall size is not changed (meaning we actually only performed a
      * reshape), data is garanteed to remain valid.
      * Otherwise, no garantee is provided regarding the validy of previous data
      * (unlike std::vector). If the new overall size is larger than the previous
      * one, all previous data is invalided. Otherwise, previous data may or may
      * not remain valid, depending on the backend implementation.
      * @tparam DIM Number of dimensions.
      * @param dims New dimensions
@@ -343,11 +343,11 @@ class Tensor : public Data,
     /**
      * @brief Change the dimensions of the Tensor object according to the given argument.
      * If the overall size is not changed (meaning we actually only performed a
      * reshape), data is garanteed to remain valid.
      * Otherwise, no garantee is provided regarding the validy of previous data
      * (unlike std::vector). If the new overall size is larger than the previous
      * one, all previous data is invalided. Otherwise, previous data may or may
      * not remain valid, depending on the backend implementation.
      * @param dims New dimensions
      */
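For contrast with the std::vector behaviour the comment above refers to, a short standalone example (plain C++, not Aidge code): std::vector preserves existing elements when it grows, whereas the Tensor documented here only guarantees the data when the overall size is unchanged.

#include <cassert>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    v.resize(6);                       // std::vector keeps the existing elements on growth
    assert(v[0] == 1 && v[2] == 3);    // still valid

    // The Tensor doc above gives no such guarantee: only a same-size change
    // (a pure reshape) is guaranteed to keep the previous data valid; growing
    // the overall size invalidates it.
    return 0;
}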
@@ -424,7 +424,7 @@ class Tensor : public Data,
             return std::string("?"); // To make Clang happy
         };
-        if (dims().empty()) { return "{}"; }
+        if (dims().empty()) { return ptrToString(mDataType, mImpl->hostPtr(), 0); }
         std::string res;
         std::size_t dim = 0;
         std::size_t counter = 0;
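The practical effect of this hunk: a tensor whose dims() list is empty (rank 0, i.e. a scalar) is now printed through ptrToString on its single element rather than as the literal "{}". A minimal standalone sketch of that branch, with a hypothetical ptrToString stand-in (plain C++, not the Aidge implementation):

#include <cstddef>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical stand-in for ptrToString(mDataType, mImpl->hostPtr(), 0):
// format the element at the given index behind a typed host pointer.
static std::string ptrToString(const float* hostPtr, std::size_t idx) {
    std::ostringstream oss;
    oss << hostPtr[idx];
    return oss.str();
}

int main() {
    std::vector<std::size_t> dims;   // empty dims: a rank-0 (scalar) tensor
    float scalar = 3.14f;            // its single element (size is now 1, not 0)

    // The old branch returned "{}" for empty dims; the new branch prints the value.
    const std::string out = dims.empty() ? ptrToString(&scalar, 0) : "{...}";
    std::cout << out << '\n';        // prints 3.14
    return 0;
}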
@@ -546,22 +546,22 @@ class Tensor : public Data,
     /**
      * Copy-cast data from a Tensor.
      * @param src Source tensor to copy-cast from.
      * @param movedSrc shared_ptr to an indermediate Tensor that will
      * contain the moved data if a device change should occur AND a type
      * conversion is necessary (otherwise it remains unused).
      * Any data already present will be overwritten. No new memory allocation
      * will occur if movedSrc has already been allocated with the right
      * type/size/device.
      * If required, memory is always allocated on current (destination)
      * Tensor's device.
      */
     void copyCastFrom(const Tensor& src, std::shared_ptr<Tensor>& movedSrc);

     /**
      * Copy-cast data from a Tensor.
      * In case of both a device change AND a data type conversion, an
      * intermediate buffer on will be allocated and deallocated each time.
      * If required, buffer's memory is always allocated on current (destination)
      * Tensor's device.
      * @param src Source tensor to copy-cast from.
      */
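The signature of the first overload is visible above, so a hedged usage sketch of the movedSrc pattern is possible; how the tensors themselves are built is not shown in this diff and is left to the caller (the include path is taken from the changed file's path):

#include <memory>
#include "aidge/data/Tensor.hpp"   // path as listed in this commit

// Hedged usage sketch of copyCastFrom(const Tensor&, std::shared_ptr<Tensor>&).
// Keeping movedSrc alive across calls avoids reallocating the intermediate
// buffer, which is only needed when a device change AND a type conversion
// both occur, as the doc comment above explains.
void copyWithReusableBuffer(Aidge::Tensor& dst,
                            const Aidge::Tensor& src,
                            std::shared_ptr<Aidge::Tensor>& movedSrc) {
    dst.copyCastFrom(src, movedSrc);
}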
@@ -579,7 +579,7 @@ class Tensor : public Data,
      * The backend stays the same.
      * @param fallback A shared_ptr to Tensor ready to be overwritten if necessary.
      * The shared_ptr does not need to be initialized. No new memory allocation
      * will occur if fallback has already been allocated with the right
      * type/size/device.
      * @param dt The desired data type.
      * @return Reference to either itself or to fallback.
@@ -594,7 +594,7 @@ class Tensor : public Data,
      * The data type stays the same.
      * @param fallback A shared_ptr to Tensor ready to be overwritten if necessary.
      * The shared_ptr does not need to be initialized. No new memory allocation
      * will occur if fallback has already been allocated with the right
      * type/size/device.
      * @param backend The desired backend.
      * @param device The desired device.
@@ -607,11 +607,11 @@ class Tensor : public Data,
      * Return a reference to a Tensor on desired data type and backend/device:
      * - itself, if already with the right characteristics;
      * - the provided Tensor, overwritten with the copy-casted data.
      * If required, fallback is always allocated on desired (destination)
      * device.
      * @param fallback A shared_ptr to Tensor ready to be overwritten if necessary.
      * The shared_ptr does not need to be initialized. No new memory allocation
      * will occur if fallback has already been allocated with the right
      * type/size/device.
      * @param dt The desired data type.
      * @param backend The desired backend.
@@ -628,11 +628,11 @@ class Tensor : public Data,
      * (data type, backend/device) as targetReqs Tensor:
      * - itself, if already with the right characteristics;
      * - the provided Tensor, overwritten with the copy-casted data.
      * If required, fallback is always allocated on current (destination)
      * Tensor's device.
      * @param fallback A shared_ptr to Tensor ready to be overwritten if necessary.
      * The shared_ptr does not need to be initialized. No new memory allocation
      * will occur if fallback has already been allocated with the right
      * type/size/device.
      * @param targetReqs Tensor with the desired target characteristics.
      * @return Reference to either itself or to fallback.
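The contract described in the last two hunks (return the tensor itself when its characteristics already match, otherwise copy-cast into the caller-provided fallback and return a reference to that) can be sketched generically. The struct and method below are hypothetical, since the member function names are not visible in this diff; only the documented behaviour is modelled:

#include <memory>
#include <string>

// Hypothetical, simplified model of the documented fallback contract.
struct FakeTensor {
    std::string dataType;   // stands in for the (data type, backend, device) triple

    FakeTensor& refWithFallback(std::shared_ptr<FakeTensor>& fallback,
                                const std::string& desiredType) {
        if (dataType == desiredType) {
            return *this;                                // already matching: itself
        }
        if (!fallback) {
            fallback = std::make_shared<FakeTensor>();   // allocate only when needed
        }
        fallback->dataType = desiredType;                // overwrite fallback (copy-cast)
        return *fallback;                                // reference to the fallback
    }
};

int main() {
    FakeTensor t{"float32"};
    std::shared_ptr<FakeTensor> fallback;                // need not be initialized
    FakeTensor& sameRef = t.refWithFallback(fallback, "float32");   // returns t itself
    FakeTensor& castRef = t.refWithFallback(fallback, "int32");     // returns *fallback
    (void)sameRef; (void)castRef;
    return 0;
}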
@@ -644,15 +644,8 @@ class Tensor : public Data,
 private:
     ///\bug not protected against overflow
-    std::size_t computeSize() {
-        if (mDims.empty()) {
-            mSize = DimSize_t(0);
-        }
-        else {
-            mSize = std::accumulate(mDims.begin(), mDims.end(), DimSize_t(1), std::multiplies<DimSize_t>());
-        }
-        return mSize;
+    void computeSize() {
+        mSize = std::accumulate(mDims.begin(), mDims.end(), DimSize_t(1), std::multiplies<DimSize_t>());
     }
 };
 } // namespace Aidge
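This last hunk is where the commit title comes from: std::accumulate with an initial value of DimSize_t(1) over an empty dims vector simply returns the initial value, so a dimensionless (scalar) tensor now gets mSize == 1 where the old code forced it to 0. A quick standalone check, using std::size_t in place of Aidge's DimSize_t:

#include <cassert>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

int main() {
    std::vector<std::size_t> dims;   // empty dims: a rank-0 (scalar) tensor

    // Product over an empty range is the initial value, i.e. 1.
    const std::size_t size = std::accumulate(dims.begin(), dims.end(),
                                             std::size_t(1),
                                             std::multiplies<std::size_t>());
    assert(size == 1);               // the old computeSize() set this case to 0
    return 0;
}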
Cyril Moineau (@cmoineau) mentioned in issue #70 (closed), 1 year ago