
Add tensor type to cudaq. #2304

Open: wants to merge 78 commits into main (base: main)

Conversation

amccaskey (Collaborator)

No description provided.


copy-pr-bot bot commented Oct 21, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@amccaskey amccaskey marked this pull request as draft October 21, 2024 14:04
@schweitzpgi (Collaborator)

@anthony-santana @sacpis Is this the interface you were looking for? (makes Jedi hand motion.)

@anthony-santana (Collaborator) left a comment


EDIT: removed outdated comment

Signed-off-by: Alex McCaskey <[email protected]>
Signed-off-by: Alex McCaskey <[email protected]>
Review threads:
python/runtime/utils/py_tensor.cpp — resolved
runtime/cudaq/utils/details/impls/xtensor_impl.cpp — outdated, resolved (3 threads)
runtime/cudaq/utils/tensor.h — outdated, resolved
runtime/cudaq/utils/type_traits.h — resolved
unittests/utils/tensor_tester.cpp — outdated, resolved (3 threads)
@schweitzpgi (Collaborator) commented Oct 22, 2024

/ok to test

Command Bot: Processing...

@annagrin (Collaborator) commented Oct 24, 2024

/ok to test

Command Bot: Processing...

@annagrin (Collaborator) commented Oct 24, 2024

/ok to test

Command Bot: Processing...

Signed-off-by: Anna Gringauze <[email protected]>
@annagrin (Collaborator) commented Oct 24, 2024

/ok to test

Command Bot: Processing...

@annagrin (Collaborator) commented Oct 25, 2024

/ok to test

Command Bot: Processing...

@annagrin (Collaborator) commented Oct 25, 2024

/ok to test

Command Bot: Processing...

throw std::runtime_error(info->name());
}

EXPECT_THROW(t.at({2, 0}), std::runtime_error);
Collaborator

This test fails in CI, in the "Create CUDA Quantum installer (xxx) / Build CUDA Quantum assets" task.

Collaborator

Trying to disable.

@annagrin (Collaborator) commented Oct 25, 2024

/ok to test

Command Bot: Processing...

@amccaskey (Collaborator, Author)

Before merging this, you should add a test (and get it working) for using cudaq::tensor in an application file compiled with nvq++. There are issues with regard to compiler compatibility and extension_point static initialization that may show up there. We may need to remove extension_point and just use xtensor directly.

@annagrin (Collaborator) commented Oct 25, 2024

/ok to test

Command Bot: Processing...

@annagrin (Collaborator)

> Before merging this, you should add a test (and get it working) for using cudaq::tensor in an application file compiled with nvq++. There are issues with regards to compiler compatibility and extension_point static initialization that may show up there. May need to remove extension_point and just use xtensor directly.

I gave it a try in amccaskey#6, let me know if this is in the right direction.

dd_add(const tensor_impl<Scalar> &left) const = 0;

// Terminal implementation of operators.
virtual tensor_impl<Scalar> *multiply(const xtensor<Scalar> &right) const = 0;
@amccaskey (Collaborator, Author)

Is the use of xtensor intended here?

Collaborator

Yes. This is using the old-school double-dispatch pattern to determine that the two input tensors have the same implementation. This is needed so that our matrix-multiply code can efficiently (and correctly) access both of the input matrices to compute the product.

I'm thinking this implementation hierarchy ought to drop the virtual methods and be redone using the curiously recurring template pattern. However, I've left it as-is for now.
