Implement missing QoS when using intraprocess communication #752

Open
ivanpauno opened this issue Jun 5, 2019 · 8 comments
Labels
backlog, enhancement (New feature or request)

Comments

@ivanpauno
Member

Feature request

Feature description

Implement KEEP_ALL history policy with intraprocess communication (see #727).
Implement non-volatile durability with intraprocess communication.

Implementation considerations

For sizing the buffers when the KEEP_ALL history policy is used, a resource-limits QoS policy will be needed (ros2/rmw#176).
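
To make the sizing question concrete, a minimal sketch, assuming a hypothetical max_samples resource limit (rmw has no such field today, which is exactly what ros2/rmw#176 asks for):

```cpp
#include <cstddef>
#include <limits>

#include "rmw/types.h"

// Hypothetical helper, not existing rclcpp/rmw API: pick an intra-process
// buffer capacity from the QoS profile. `max_samples` stands in for the
// resource-limits policy proposed in ros2/rmw#176.
size_t choose_buffer_capacity(const rmw_qos_profile_t & qos, size_t max_samples)
{
  if (qos.history == RMW_QOS_POLICY_HISTORY_KEEP_LAST) {
    // KEEP_LAST: the buffer is bounded by the requested depth.
    return qos.depth;
  }
  // KEEP_ALL: without a resource limit the buffer would be unbounded,
  // so fall back to the assumed max_samples limit when one is given.
  return max_samples > 0 ? max_samples : std::numeric_limits<size_t>::max();
}
```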

For implementing TRANSIENT_LOCAL durability:

  • With history KEEP_LAST, the intra_process_manager could store the last message sent by each publisher. Adding a map keyed by publisher_id that stores a shared_ptr to the message sounds reasonable (see the sketch after this list).
  • With history KEEP_ALL, the intra_process_manager should never pop from the internal buffer.
  • When a new subscription is created with TRANSIENT_LOCAL durability, all the messages from publishers on the same topic with TRANSIENT_LOCAL durability should be delivered (whether only the last message or all of them are delivered depends on the subscriber and publisher history policies). When the publisher history policy is KEEP_LAST and a message is not found in the buffer (because it was popped), it should be taken from the last_message map mentioned above.
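
A rough sketch of the last_message map idea (the class and member names below are illustrative, not existing intra_process_manager API):

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <utility>
#include <vector>

// Hypothetical store of the last message sent by each TRANSIENT_LOCAL
// publisher, keyed by publisher_id, as described in the list above.
template<typename MessageT>
class LastMessageStore
{
public:
  // Called on every intra-process publish by a TRANSIENT_LOCAL publisher.
  void store(uint64_t publisher_id, std::shared_ptr<const MessageT> msg)
  {
    last_message_[publisher_id] = std::move(msg);
  }

  // Called when a new TRANSIENT_LOCAL subscription on the topic is created:
  // returns the retained messages that should be pushed into its buffer.
  std::vector<std::shared_ptr<const MessageT>> collect() const
  {
    std::vector<std::shared_ptr<const MessageT>> out;
    out.reserve(last_message_.size());
    for (const auto & entry : last_message_) {
      out.push_back(entry.second);
    }
    return out;
  }

private:
  std::map<uint64_t, std::shared_ptr<const MessageT>> last_message_;
};
```

On publish the manager would call store(); when a late-joining TRANSIENT_LOCAL subscription is matched, collect() provides the messages to deliver, falling back to this map whenever the ring buffer has already popped the sample.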
ivanpauno added the enhancement (New feature or request) label on Jun 5, 2019
@alsora
Collaborator

alsora commented Jun 30, 2021

Hi, resurrecting a very old ticket here: is there any blocker preventing this from being implemented?

@wjwwood
Member

wjwwood commented Jul 1, 2021

Nothing prevents it from being worked on, but I think we'll need to introduce some new QoS elements as @ivanpauno's post mentions. KEEP_ALL isn't enough information to emulate what the middleware is doing, at least for DDS middlewares.

@msmcconnell

Any movement on this? The lack of TRANSIENT_LOCAL effectively prevents latching when using components, which is a major limitation.

@clalancette
Contributor

Any movement on this? The lack of TRANSIENT_LOCAL effectively prevents latching when using components, which is a major limitation.

It needs someone to pick it up and implement it.

While I agree that this would be a good feature to have, I'm not sure that it is a major limitation when using intra-process comms. The main reason to use latching is to keep extraneous data off of the wire and not to spend time doing unnecessary publishes when nothing has changed. But since intra-process comms is so efficient, and doesn't send anything over the wire, just repeatedly publishing should accomplish more or less the same thing.
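
For example, a component could approximate latching by republishing its latest value on a timer. A minimal sketch, with made-up node, topic, and period names:

```cpp
#include <chrono>
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

// Workaround sketch: with intra-process comms enabled there is no wire cost,
// so periodically re-sending the latest message behaves roughly like latching.
class PeriodicLatcher : public rclcpp::Node
{
public:
  PeriodicLatcher()
  : Node("periodic_latcher", rclcpp::NodeOptions().use_intra_process_comms(true))
  {
    pub_ = create_publisher<std_msgs::msg::String>("status", rclcpp::QoS(1));
    latest_.data = "initial status";
    timer_ = create_wall_timer(
      std::chrono::seconds(1),
      [this]() {pub_->publish(latest_);});  // re-send the latest value
  }

private:
  std_msgs::msg::String latest_;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<PeriodicLatcher>());
  rclcpp::shutdown();
  return 0;
}
```

The trade-off, discussed below, is that subscribers receive the same sample repeatedly and have to ignore duplicates themselves.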

@msmcconnell

@clalancette You say the main purpose of latching is to prevent extraneous data on the wire, but the use case I have always seen is focused on supporting late-joining nodes without impacting already-running nodes. If I were to continuously send the same old message, it might not incur much over-the-wire cost, but it would mean all receiving nodes would need custom logic to reject the old message; otherwise they would incur the cost of reprocessing the data.

@alsora
Collaborator

alsora commented May 13, 2022

I agree on the importance of transient local for the reasons mentioned by @msmcconnell.
iRobot may be able to take on this task in the near future (June maybe?)

KEEP_ALL is a different story: it's not clear what the expected behavior should be there, and I'm not sure it's really useful.

@clalancette
Contributor

Yes, I agree that we should have the feature, and it would be more efficient. I just don't see it as entirely blocking any particular use case.

nnmm pushed a commit to ApexAI/rclcpp that referenced this issue Jul 9, 2022
@clalancette
Contributor

This was partially completed with #2303.

I think we are still missing support for KEEP_ALL, as well as depth in the intra-process manager, so I'll leave this open.
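
For reference, a minimal sketch of the QoS a user would request here (topic, type, and depth are placeholders; this assumes, per the above, that #2303 covers the transient_local side for intra-process delivery while KEEP_ALL and depth handling are still missing):

```cpp
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

// Placeholder endpoints showing the QoS settings under discussion.
void make_endpoints(rclcpp::Node & node)
{
  rclcpp::QoS qos(rclcpp::KeepLast(10));
  qos.transient_local();

  auto pub = node.create_publisher<std_msgs::msg::String>("chatter", qos);
  auto sub = node.create_subscription<std_msgs::msg::String>(
    "chatter", qos,
    [](std_msgs::msg::String::ConstSharedPtr msg) {
      (void)msg;  // a late-joining transient_local subscription should get retained data
    });
  (void)pub;
  (void)sub;
}
```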
