[FEA] Change semantics of RMM memory resource equality comparison #1402

Open · Tracked by #16
harrism opened this issue Dec 6, 2023 · 5 comments
Labels: feature request (New feature or request)

Comments

harrism (Member) commented Dec 6, 2023

Part of #1443

We plan to adopt cuda::mr::memory_resource concepts as the basis for RMM memory resources. This refactoring will bring flexibility both in building and composing memory resources (e.g., the machinery behind our stream-ordered pool_memory_resource can be reused for non-device memory, as in #1392) and for applications, which gain a way to specify the properties of the memory resources their functions expect to be passed (via the cuda::mr::resource_ref property interface).
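
To make the property interface concrete, here is a minimal sketch of a consumer function written against cuda::mr::resource_ref. It assumes the experimental libcu++ `<cuda/memory_resource>` interface; make_scratch is a made-up name, not part of RMM or libcu++.

```cpp
// Sketch only: relies on the experimental libcu++ memory_resource interface,
// which may need LIBCUDACXX_ENABLE_EXPERIMENTAL_MEMORY_RESOURCE to be defined.
#include <cuda/memory_resource>

#include <cstddef>

// A library function that states, via the property interface, that it requires
// a resource providing device-accessible memory. Any concrete resource type
// satisfying that property can be passed, with no dependency on RMM's
// inheritance hierarchy.
void* make_scratch(cuda::mr::resource_ref<cuda::mr::device_accessible> mr, std::size_t bytes)
{
  return mr.allocate(bytes, alignof(std::max_align_t));
}
```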

One effect of this is that cuda::mr::resource_ref equality comparison has different semantics from RMM's current device_memory_resource equality comparison, which mimics std::pmr::memory_resource. The std::pmr semantics are defined as follows:

Two memory_resources compare equal if and only if memory allocated from one memory_resource can be deallocated from the other and vice versa.

In contrast, two cuda::mr::resource_refs compare equal only if they have the same equality function pointer, and calling that function on the two resource_refs returns true.
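
A small sketch of how the resource_ref semantics above would play out, assuming RMM's resources expose the cuda::mr::device_accessible property once the #1443 refactor lands (the exact comparison behavior is up to the libcu++ implementation):

```cpp
#include <cuda/memory_resource>

#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/logging_resource_adaptor.hpp>

int main()
{
  rmm::mr::cuda_memory_resource cuda_mr;
  rmm::mr::logging_resource_adaptor<rmm::mr::cuda_memory_resource> log_mr{&cuda_mr, "log.csv"};

  cuda::mr::resource_ref<cuda::mr::device_accessible> a{cuda_mr};
  cuda::mr::resource_ref<cuda::mr::device_accessible> b{log_mr};

  // The two refs wrap different concrete types, so they hold different
  // equality-function pointers and compare unequal, even though either
  // resource could in principle deallocate memory allocated by the other.
  bool const same = (a == b);
  return same ? 1 : 0;  // expected to return 0 under resource_ref semantics
}
```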

Arguably, the std::pmr semantics are more useful. For example, one might allocate memory with a logging_memory_resource<cuda_memory_resource> (a logging MR whose upstream is a CUDA MR) but later need to deallocate that memory with just a cuda_memory_resource; under the std::pmr-style semantics the two compare equal, because the upstream of the former compares equal to the latter.
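
For contrast, here is a rough sketch of how the current std::pmr-style "compatibility" comparison is used in RMM today, using the existing logging_resource_adaptor (individual adaptors may differ in detail):

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/logging_resource_adaptor.hpp>

int main()
{
  rmm::mr::cuda_memory_resource cuda_mr;
  rmm::mr::logging_resource_adaptor<rmm::mr::cuda_memory_resource> log_mr{&cuda_mr, "log.csv"};

  void* p = log_mr.allocate(256);

  // Under the std::pmr-style semantics, the adaptor compares equal to its
  // upstream, signalling that cuda_mr may deallocate memory allocated by
  // log_mr.
  if (log_mr.is_equal(cuda_mr)) { cuda_mr.deallocate(p, 256); }
  return 0;
}
```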

However, these semantics are difficult to support within the cuda::mr design. Because it is based on concepts, rather than inheritance, there is no base class (unlike std::pmr and rmm::mr::device_memory_resource). Without a base class, we can't dynamic_cast, which makes it hard to do the kind of compatibility comparison described above. RTTI-based solutions may be possible, but we don't necessarily want to require RTTI for RMM.
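
For reference, this is the kind of base-class pattern that makes the compatibility comparison possible in std::pmr and rmm::mr::device_memory_resource. The adaptor below is a sketch, not RMM's actual implementation; the key point is the dynamic_cast in do_is_equal, which has no equivalent in a purely concept-based design:

```cpp
#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/device_memory_resource.hpp>

#include <cstddef>

// A toy adaptor that forwards to an upstream resource and implements
// "compatibility" equality by unwrapping the other resource when possible.
class passthrough_adaptor final : public rmm::mr::device_memory_resource {
 public:
  explicit passthrough_adaptor(rmm::mr::device_memory_resource* upstream) : upstream_{upstream} {}

 private:
  void* do_allocate(std::size_t bytes, rmm::cuda_stream_view stream) override
  {
    return upstream_->allocate(bytes, stream);
  }

  void do_deallocate(void* ptr, std::size_t bytes, rmm::cuda_stream_view stream) override
  {
    upstream_->deallocate(ptr, bytes, stream);
  }

  bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    if (this == &other) { return true; }
    // dynamic_cast requires a common polymorphic base class; a concept-only
    // design has no such base to cast to.
    auto const* cast = dynamic_cast<passthrough_adaptor const*>(&other);
    return (cast != nullptr) ? upstream_->is_equal(*cast->upstream_) : upstream_->is_equal(other);
  }

  rmm::mr::device_memory_resource* upstream_;
};
```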

The concept-based approach does not preclude an inheritance hierarchy like the one RMM currently has. However, one advantage of refactoring to the concept-based approach is that libraries can provide pluggable memory interfaces based only on a fairly lightweight (and hopefully someday standard) interface like cuda::mr::resource_ref, without requiring RMM and without defining their own plugin interfaces.

This was discussed during the design sessions for cuda::mr; no strong opinions were held on the issues above, and the designers felt reasonably comfortable diverging from std::pmr here.

I wanted to open this up for discussion. First, I want to find out whether anyone relies on the "compatibility" equality-comparison semantics currently built into RMM. My hunch is that few, if any, do. Please comment below on this question, and then we can move on to discussing options.

harrism added the bug (Something isn't working) and question (Further information is requested) labels on Dec 6, 2023
harrism moved this from Todo to In Progress in the RMM Project Board on Dec 6, 2023
harrism self-assigned this on Dec 6, 2023
bdice (Contributor) commented Dec 6, 2023

I'm not aware of any code in libcudf (or in RAPIDS generally) that depends on particular semantics of memory resource equality, and I am not aware of any issues that would arise from changing the semantics as described above. Absence of evidence != evidence of absence, of course, but I hope this is helpful.

harrism (Member, Author) commented Dec 7, 2023

Thanks @bdice. I also did some searching of the main RAPIDS repos.

  • I see various comparisons of device_memory_resource* to nullptr; these are pointer comparisons, not calls to the resource operator==, so they are not an issue.
  • I see RAFT explicitly calling is_equal, which does exercise this code path, but according to @cjnolet that code is planned to be removed.
  • I see RAFT keeping a std::vector of std::shared_ptr<device_memory_resource>, and also places in RMM's gtests that keep vectors of resources, but these should not use operator==. I would be worried if there were uses of std::set or std::map.

harrism (Member, Author) commented Jan 23, 2024

The usage in RAFT has been removed.

harrism added the feature request (New feature or request) label and removed the bug and question labels on Jan 30, 2024
harrism moved this from In Progress to Todo in the RMM Project Board on Jan 30, 2024
harrism removed their assignment on Jan 30, 2024
harrism changed the title from "[DISCUSSION] Change semantics of RMM memory resource equality comparison" to "[FEA] Change semantics of RMM memory resource equality comparison" on Jan 30, 2024
harrism (Member, Author) commented Feb 20, 2024

@miscco @jrhemstad I think having a concept for get_upstream_resource would also help the usability of equality comparison, because users could call get_upstream_resource, if it exists, and compare those resources. This way a user could check whether the upstream of a logging_mr is the same as some other MR, and therefore whether that other MR can be used to free memory allocated with the logging_mr. Thoughts?
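
A hypothetical sketch of what such a concept could enable (none of these names exist in RMM or cuda::mr; they are purely illustrative):

```cpp
#include <concepts>

// Hypothetical: detects whether a resource exposes a get_upstream_resource()
// accessor, so callers can "peel" adaptors before comparing.
template <typename Resource>
concept has_upstream_resource = requires(Resource const& r) { r.get_upstream_resource(); };

// Hypothetical: true when two resource types can be compared with operator==.
template <typename Lhs, typename Rhs>
concept comparable_resources = requires(Lhs const& a, Rhs const& b) {
  { a == b } -> std::convertible_to<bool>;
};

// Answers "may rhs deallocate memory allocated by lhs?" by comparing lhs, and
// failing that each of its upstream resources in turn, against rhs.
template <typename Lhs, typename Rhs>
bool may_deallocate_with(Lhs const& lhs, Rhs const& rhs)
{
  if constexpr (comparable_resources<Lhs, Rhs>) {
    if (lhs == rhs) { return true; }
  }
  if constexpr (has_upstream_resource<Lhs>) {
    return may_deallocate_with(lhs.get_upstream_resource(), rhs);
  } else {
    return false;
  }
}
```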

harrism (Member, Author) commented May 7, 2024

This may be unnecessary, as @miscco and I were discussing. @miscco mentioned that it would be useful for cuda::mr to have a resource_adaptor concept that could be queried and used to implement equality semantics closer to those of the std::pmr::memory_resource and rmm::mr::device_memory_resource implementations.

Labels: feature request (New feature or request)
Project status: Todo
Development: No branches or pull requests
2 participants