Actually, the resize function changes the Tensor dimensions but does not touch the TensorImpl, which is incorrect.
This must be fixed, but what is the precise expected behavior? Notably, should existing data be kept? (I propose not to: there are too many cases where it would be meaningless.)
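To make the proposal concrete, here is a minimal sketch of the intended fix, under the assumption that resize() should both update the dimensions and reset the TensorImpl (all names and types here are illustrative, not the project's actual API):

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <numeric>
#include <vector>

// Hypothetical backend storage; stands in for the real TensorImpl.
struct TensorImpl {
    std::vector<float> storage;
    explicit TensorImpl(std::size_t n) : storage(n) {}
};

struct Tensor {
    std::vector<std::size_t> dims;
    std::shared_ptr<TensorImpl> impl;

    std::size_t size() const {
        return std::accumulate(dims.begin(), dims.end(),
                               std::size_t{1}, std::multiplies<>());
    }

    // Proposed behavior: do not keep existing data; reset the impl so
    // the backend re-allocates storage for the new overall size.
    void resize(std::vector<std::size_t> newDims) {
        dims = std::move(newDims);
        impl = std::make_shared<TensorImpl>(size());
    }
};
```

The point of the sketch is that resizing touches both sides: the Tensor's dims and the TensorImpl's allocation, instead of leaving the impl stale.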
Indeed, I think it may be difficult for every implementation to guarantee that the existing data is kept. Resize could therefore just reset the TensorImpl pointer.
I'm not sure that we should rely on lazyInit().
It happens to work now, but lazy initialization is not a requirement so far, and it implies an overhead on every storage access.
I still think that resize must be fixed (fixed in !13 (closed)).
I also propose to document that resize invalidates existing data, whatever the scenario.
Regarding your last point, I agree and propose to add the following in resize() specification (see !57 (merged)):
* If the overall size is not changed (meaning we actually only performed a
* reshape), data is guaranteed to remain valid.
* Otherwise, no guarantee is provided regarding the validity of previous data
* (unlike std::vector). If the new overall size is larger than the previous
* one, all previous data is invalidated. Otherwise, previous data may or may
* not remain valid, depending on the backend implementation.
If the overall size is not changed, however, I think it is important to guarantee that data remains valid.
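The contract proposed above can be sketched as follows, assuming a simplified Tensor that owns its storage directly (illustrative only; the real implementation delegates to a backend TensorImpl):

```cpp
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

struct Tensor {
    std::vector<std::size_t> dims;
    std::vector<float> data;

    std::size_t size() const {
        return std::accumulate(dims.begin(), dims.end(),
                               std::size_t{1}, std::multiplies<>());
    }

    void resize(std::vector<std::size_t> newDims) {
        const std::size_t oldSize = size();
        dims = std::move(newDims);
        const std::size_t newSize = size();
        if (newSize == oldSize) {
            return;  // pure reshape: data guaranteed to remain valid
        }
        // Overall size changed: no guarantee on previous data.
        // Here we simply re-allocate (a backend might do otherwise).
        data.assign(newSize, 0.0f);
    }
};
```

With this contract, a same-size resize is a cheap metadata update, while any size change may discard the contents.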
Regarding lazyInit(), I think it helps decouple Tensor from TensorImpl by delegating the management of Tensor's capacity entirely to TensorImpl, regardless of what happens at a higher level (see !57 (merged), which clarifies the role of TensorImpl). I don't think the overhead would be a major issue here.
Regarding resize: I see your point, but wouldn't it be clearer to have two different functions, resize and reshape? The intents are different and, though your proposal is possible, I think separate functions would be clearer (it's the way MATLAB goes, for instance). I would possibly be even more "violent" and remove all guarantees from resize: even if you keep the same dimensions, you may lose your data.
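The two-function alternative suggested here could look like the following sketch (hypothetical names and a simplified Tensor; not the project's actual interface):

```cpp
#include <cstddef>
#include <functional>
#include <numeric>
#include <stdexcept>
#include <vector>

struct Tensor {
    std::vector<std::size_t> dims;
    std::vector<float> data;

    std::size_t size() const {
        return std::accumulate(dims.begin(), dims.end(),
                               std::size_t{1}, std::multiplies<>());
    }

    // reshape: overall size must not change; data is kept.
    void reshape(std::vector<std::size_t> newDims) {
        const std::size_t newSize =
            std::accumulate(newDims.begin(), newDims.end(),
                            std::size_t{1}, std::multiplies<>());
        if (newSize != size()) {
            throw std::invalid_argument("reshape: overall size must not change");
        }
        dims = std::move(newDims);
    }

    // resize: any dims accepted; no guarantee at all on existing data,
    // even if the overall size happens to be unchanged (the "violent" option).
    void resize(std::vector<std::size_t> newDims) {
        dims = std::move(newDims);
        data.assign(size(), 0.0f);
    }
};
```

Splitting the API this way makes each caller's intent explicit: reshape is a metadata-only view change, resize is a full re-allocation.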
Regarding lazyInit(), what bothers me the most is that it implies a test on each storage access. I would be in favor of an explicit call, or a call on TensorImpl creation. I'm currently unsure about the best solution for the actual memory allocation.
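The two allocation strategies under discussion can be contrasted in a small sketch (hypothetical types; the real TensorImpl differs), showing where the per-access test appears in the lazy variant and disappears in the eager one:

```cpp
#include <cstddef>
#include <vector>

// Lazy strategy: allocation is deferred until first access,
// at the cost of a test on every data() call.
struct LazyImpl {
    std::size_t capacity;
    std::vector<float> storage;
    explicit LazyImpl(std::size_t n) : capacity(n) {}
    float* data() {
        if (storage.empty()) {      // the per-access test lazyInit() implies
            storage.resize(capacity);
        }
        return storage.data();
    }
};

// Eager strategy: allocate on TensorImpl creation; access is branch-free.
struct EagerImpl {
    std::vector<float> storage;
    explicit EagerImpl(std::size_t n) : storage(n) {}
    float* data() { return storage.data(); }
};
```

The lazy variant avoids allocating tensors that are never read or written; the eager variant keeps the hot access path free of branches.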
resize() behavior has been clarified in MR !69 (merged).
Const access to the storage does not call lazyInit() and is therefore guaranteed not to trigger a memory allocation, but it nevertheless implies a bounds check, which is a minimal overhead for safety.
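A sketch of that access pattern, with illustrative names (the actual accessors may differ): the non-const path may allocate lazily, while the const path only performs the bounds check.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

class TensorImpl {
public:
    explicit TensorImpl(std::size_t capacity) : mCapacity(capacity) {}

    // Non-const access: may trigger lazy allocation.
    float& at(std::size_t i) {
        if (mStorage.empty()) {
            mStorage.resize(mCapacity);  // lazyInit()-style allocation
        }
        if (i >= mStorage.size()) {
            throw std::out_of_range("TensorImpl::at");
        }
        return mStorage[i];
    }

    // Const access: guaranteed not to allocate; only a bounds check.
    float at(std::size_t i) const {
        if (i >= mStorage.size()) {
            throw std::out_of_range("TensorImpl::at (const)");
        }
        return mStorage[i];
    }

private:
    std::size_t mCapacity;
    std::vector<float> mStorage;
};
```

On a never-initialized impl, const access thus fails the bounds check (size is still 0) rather than silently allocating.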
Closing the issue. The need to distinguish between resize() and reshape() may be discussed in a new dedicated issue.