Add tensor type to cudaq. #2304
base: main
Conversation
@anthony-santana @sacpis Is this the interface you were looking for? (makes Jedi hand motion.)
EDIT: removed outdated comment
throw std::runtime_error(info->name());
}

EXPECT_THROW(t.at({2, 0}), std::runtime_error);
This test fails in the CI "Create CUDA Quantum installer (xxx) / Build CUDA Quantum assets" task.
Trying to disable.
Before merging this, you should add a test (and get it working) for using cudaq::tensor in an application file compiled with
I gave it a try in amccaskey#6, let me know if this is in the right direction.
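For context, here is a minimal sketch of what such an application-level test could look like. The header path, the shape-taking constructor, and a reference-returning at() are assumptions for illustration; only the bounds-checked access that throws std::runtime_error is taken from the diff above.

// Hypothetical application-level test. The include path and the
// constructor/at() signatures are assumptions, not the PR's confirmed API.
#include "cudaq/utils/tensor.h" // assumed header location
#include <gtest/gtest.h>
#include <stdexcept>

TEST(TensorAppTest, BoundsCheckedAccess) {
  // Assumed: a rank-2 tensor constructed from its shape {rows, cols}.
  cudaq::tensor<double> t({2, 2});
  // In-range element access and mutation.
  t.at({0, 1}) = 3.14;
  EXPECT_DOUBLE_EQ(t.at({0, 1}), 3.14);
  // Out-of-range access is expected to throw, as in the unit test above.
  EXPECT_THROW(t.at({2, 0}), std::runtime_error);
}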
dd_add(const tensor_impl<Scalar> &left) const = 0;

// Terminal implementation of operators.
virtual tensor_impl<Scalar> *multiply(const xtensor<Scalar> &right) const = 0;
is the use of xtensor intended here?
Yes. This is using the old-school double-dispatch pattern to determine that the two input tensors have the same implementation. This is needed so that our matrix multiply code can efficiently (and correctly) access both of the input matrices to compute the product.
I'm thinking this implementation hierarchy ought to get rid of the virtual methods and be redone using the curiously recurring template pattern. However, I've left it as is up to this point.
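For readers less familiar with the pattern being described, here is a minimal sketch of double dispatch for the multiply case. The class and method names mirror the snippet above, but the bodies and the forward declaration are illustrative assumptions, not the PR's actual code.

// Illustrative double-dispatch sketch; not the PR's actual implementation.
template <typename Scalar>
class xtensor; // concrete tensor implementation, forward-declared

template <typename Scalar>
class tensor_impl {
public:
  virtual ~tensor_impl() = default;
  // First dispatch: the left operand only sees the abstract type of the
  // right operand, so it must ask the right operand to dispatch back.
  virtual tensor_impl<Scalar> *multiply(const tensor_impl<Scalar> &right) const = 0;
  // Terminal (second) dispatch: both operands are now known to be xtensor,
  // so the kernel can access both concrete buffers. Note the argument is
  // the original *left* operand, since the callback reverses the roles.
  virtual tensor_impl<Scalar> *multiply(const xtensor<Scalar> &left) const = 0;
};

template <typename Scalar>
class xtensor : public tensor_impl<Scalar> {
public:
  tensor_impl<Scalar> *multiply(const tensor_impl<Scalar> &right) const override {
    // This call resolves (virtually) to right's terminal overload, with
    // *this (the left operand) now statically typed as xtensor.
    return right.multiply(*this);
  }
  tensor_impl<Scalar> *multiply(const xtensor<Scalar> &left) const override {
    // left * (*this): a real implementation would run the matrix-multiply
    // kernel over both concrete storage buffers here.
    return nullptr; // placeholder
  }
};

If the hierarchy were instead redone with CRTP, the derived type would become a template parameter of tensor_impl, the concrete storage would be known at compile time, and both virtual calls could be removed; the trade-off is that code holding a tensor_impl pointer could no longer be implementation-agnostic at runtime.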