This feature is needed to properly support back pressure (a slow receiver with a fast producer) and reliable communication. The current implementation hides these problems with excessive buffering, but that is its own major issue (buffer bloat) and could be solved with RESOURCE_LIMITS.
Implementing this should be coupled with either
- an option to rmw_publish with a maximum timeout, or
- setting the blocking timeout to 0 in the RMW implementation and returning as soon as possible. This would be my preferred solution, since it avoids passing around a parameter that most users will never need.
In either case we need a return code that distinguishes the back-pressure case (resource limits exceeded) from real RMW errors, or an output argument akin to taken in rmw_take.
Without this, rmw_publish cannot be used in a cyclic real-time task, because its execution time has no known upper bound, even though the rest of the API can be used in a polling fashion; errors must currently be silently ignored.
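To sketch what the non-blocking variant would give callers, here is a minimal self-contained C example. All names here are hypothetical stand-ins, not the actual rmw API: a bounded queue plays the role of RESOURCE_LIMITS, and try_publish returns immediately with a distinct back-pressure code instead of blocking.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical return codes: back pressure (resource limits exceeded)
 * is distinguishable from a real error. */
typedef enum { RET_OK = 0, RET_RESOURCE_EXHAUSTED = 1, RET_ERROR = 2 } ret_t;

#define MAX_SAMPLES 4 /* stand-in for a RESOURCE_LIMITS max_samples setting */

typedef struct {
    int samples[MAX_SAMPLES];
    int count;
} bounded_writer_t;

/* Non-blocking publish: returns immediately instead of waiting for a
 * slow receiver to drain the queue, so per-call execution time is bounded. */
ret_t try_publish(bounded_writer_t *w, int msg) {
    if (w->count >= MAX_SAMPLES) {
        return RET_RESOURCE_EXHAUSTED; /* back pressure, not an error */
    }
    w->samples[w->count++] = msg;
    return RET_OK;
}

/* A cyclic task: each cycle has bounded execution time regardless of the
 * subscriber's speed; the caller decides how to react (drop, count, retry).
 * Returns the number of cycles that hit back pressure. */
int run_cycles(bounded_writer_t *w, int n_cycles) {
    int dropped = 0;
    for (int i = 0; i < n_cycles; ++i) {
        if (try_publish(w, i) == RET_RESOURCE_EXHAUSTED) {
            ++dropped; /* e.g. increment a diagnostics counter */
        }
    }
    return dropped;
}
```

The point of the separate RET_RESOURCE_EXHAUSTED code is exactly the distinction asked for above: the cyclic task can treat it as expected back pressure instead of an unspecific error.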
I have a simple test case that showcases the problem and how FastRTPS and CycloneDDS behave with their default settings:
- FastRTPS uses 5000 samples as its default RESOURCE_LIMITS, which allows 10000 messages to be in flight between the publisher and subscriber. When the limit is hit, you get a random blocking time (~50-140 ms) in rmw_publish, or, much more likely, an unspecific RMW_RET_ERROR.
- CycloneDDS has unlimited buffering (at least on the subscriber side), which causes the subscriber to crash sooner or later when memory runs out. Coupled with a FastRTPS subscriber, the publisher starts blocking without errors, but with seemingly random wait times of up to 1000 ms in my simple test case.
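For reference, FastRTPS's 5000-sample default can be changed today via an XML profile. A fragment along these lines raises the writer's sample limit together with KEEP_ALL history (element names are from the Fast DDS XML profile schema as I recall it, so double-check against the vendor documentation):

```xml
<publisher profile_name="bounded_publisher">
  <topic>
    <historyQos>
      <kind>KEEP_ALL</kind>
    </historyQos>
    <resourceLimitsQos>
      <!-- default is 5000; this is the bound whose exhaustion
           triggers the blocking observed in rmw_publish -->
      <max_samples>20000</max_samples>
    </resourceLimitsQos>
  </topic>
</publisher>
```

This only tunes the symptom per vendor, though; it does not give the application a portable way to set the limit or to detect when it is hit.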
Feature request
Feature description
Expose the resource limits QoS policy. Without it, the KEEP_ALL history policy is not configurable at all.
See comment ros2/rclcpp#727 (comment).
It will also be needed if we want to implement the KEEP_ALL policy with intra-process communication (see ros2/rclcpp#727).
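What exposing this could look like at the RMW level can be sketched as follows. This is a hypothetical extension, not the actual rmw_qos_profile_t: the three field names are modeled on the DDS RESOURCE_LIMITS policy, and the simplified history enum stands in for the real one.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the existing rmw history policy enum. */
typedef enum {
    HISTORY_KEEP_LAST,
    HISTORY_KEEP_ALL
} history_policy_t;

/* Hypothetical profile extension: the DDS RESOURCE_LIMITS fields exposed
 * alongside the existing settings.  0 means "unlimited" in this sketch. */
typedef struct {
    history_policy_t history;
    size_t depth;                   /* only meaningful for KEEP_LAST */
    size_t max_samples;             /* total samples buffered per writer/reader */
    size_t max_instances;
    size_t max_samples_per_instance;
} qos_profile_t;

/* With KEEP_LAST, depth bounds the buffer.  With KEEP_ALL, depth is
 * ignored and max_samples becomes the only bound on buffering, which is
 * why KEEP_ALL is not really configurable without exposing it. */
size_t effective_buffer_bound(const qos_profile_t *qos) {
    if (qos->history == HISTORY_KEEP_LAST) {
        return qos->depth;
    }
    return qos->max_samples; /* 0 => unbounded: the buffer-bloat case */
}
```

Under this sketch, today's situation corresponds to KEEP_ALL with max_samples fixed by the vendor (FastRTPS) or unbounded (CycloneDDS), with no way for the application to change it.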