We discovered a potential bug in the collision detection. In collisionDetection_Impact_implementation.cpp, line 122, the norm of a vector is calculated. If the two polygons are very close (i.e., two corners are very close but not equal), the length of this vector is smaller than the epsilon (10^-9) and Norm() throws an error. This happened because the calculation of the bounding boxes of the two agents produced slightly different x,y coordinates due to rounding.
I believe the collision detection as a whole is currently faulty, not just the edge case where two corners almost overlap (which crashes).
I believe this because there is a mismatch between the comment at line 116 and the code that follows:
```cpp
// calculate normalized normal on polygon edge and distance for Hesse normal form
Common::Vector2d<units::length::meter_t> normal2;
normal2.x = corners2[indices22].y - corners2[indices21].y;
normal2.y = -(corners2[indices22].x - corners2[indices21].x);
normal2.Norm();
auto distance2 = normal2.Dot(corners2[indices21]);
```
The Hesse normal form expresses a line using its normalized normal, but normal2 is never normalized: Vector2d<T>::Norm() computes the normalized vector and returns it without modifying the vector it was called on. In other words, the result of normal2.Norm() is erroneously being discarded.
What's peculiar is that just 15 lines previously, normal1 is normalized correctly:
```cpp
// calculate normalized normal on polygon edge and distance for Hesse normal form
Common::Vector2d<units::length::meter_t> normal1;
normal1.x = corners1[indices12].y - corners1[indices11].y;
normal1.y = -(corners1[indices12].x - corners1[indices11].x);
auto tmpNormalizedVector = normal1.Norm();
normal1.x = units::length::meter_t(tmpNormalizedVector.x.value());
normal1.y = units::length::meter_t(tmpNormalizedVector.y.value());
auto distance1 = normal1.Dot(corners1[indices11]);
```
It's not pretty (superfluous default initialization of normal1, and the units type forcing an additional temporary vector), but you can see that normal1 is assigned the normalized components afterwards.
A clean solution would go beyond just fixing the discarded normalization of normal2.
Suggested changes:
- Vector2d<T>::Norm() needs to be const-qualified to communicate that it does not modify any data members
- Vector2d<T>::Norm() needs to be marked [[nodiscard]] to further rule out the expectation of in-place modification despite its ambiguous name
- Vector2d<T>::Norm() computes the length of the vector twice. Ideally, checking that the input is normalizable should fall outside the responsibility of the normalization function, but if the check remains, it should re-use the computed length instead of calling HasNoLength()
- The documentation of Vector2d<T>::Norm() needs to be corrected
- Vector2d<T>::HasNoLength() should compare the squared length of the vector (dot product with itself) against the squared epsilon to avoid the performance cost of taking a square root
Since the collision implementation has to check whether normal2 can be normalized before doing so, normal2.Norm(); should be replaced with something like this:
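For example (a sketch only; how the degenerate case should be handled, e.g. skipping the edge versus returning early, is a design decision for the maintainers, and the surrounding loop context is assumed):

```cpp
if (normal2.HasNoLength())
{
    // The two corners (nearly) coincide, so this edge has no usable normal.
    // Skipping the degenerate edge is one possible handling.
    continue;
}
const auto normalizedNormal2 = normal2.Norm();
normal2.x = units::length::meter_t(normalizedNormal2.x.value());
normal2.y = units::length::meter_t(normalizedNormal2.y.value());
auto distance2 = normal2.Dot(corners2[indices21]);
```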