Consider adding expression templates support for `norm`, `dot`, and `cross` products on vector quantities #463
How to support things like |
I think my off-the-cuff reaction is that I would personally find it hard to be confident I was getting this right. Take this with a grain of salt, since I'm also not at all caught up on how vector and tensor "characters" work. 🙂

But let's take the cross product as an example. The vector cross product is a kludge that only makes sense in 3 dimensions. It's the dual of the wedge product, a more general and better-founded product which works in arbitrary dimensions. (One of my favorite books on this is Geometric Algebra for Computer Science. This gets discussed in chapter 3.) With the cross product, you end up with something that "kinda" acts like a vector for some operations, but not for others. For example, under inversion, "real" vectors are negated, but cross products aren't. I think the usual way to phrase this is to call the cross product a "pseudo-vector".

So, besides the "vector" character, should we have a "pseudo-vector" character? And if so, how about pseudo-scalars and pseudo-tensors?

I do see some appeal here, some benefit to figuring it all out. For example, based on the transformation properties, it seems clear that no operation which adds vectors and pseudo-vectors can be meaningful... right? (Does anyone have a counter-example?) If so, then it'd be great if we could catch this at compile time! But then again, many (most?) users may not even know when they have a pseudo-vector instead of a "real" vector, so this could lead to frustrating compile time errors. (For example, normal vectors are pseudo-vectors, but users are very likely to reach for "regular" vectors for these.)

tl;dr: I know just enough to recognize that there's significant complexity lurking here, but not enough to know how to handle it.
Well, yes, it seems that is tough indeed, but who said that the physical quantities library is easy 😨. I thought about this for a while, and I am not sure if there is any other solution if we want to do it right...

Let's assume a user that tries to do it correctly, and not just to provide slideware code that forgives everything. By this, I mean a user that uses a real linear algebra library to express vector quantities.

I might be wrong, but I think that none of the libraries on the market actually has unit tests involving real linear algebra as underlying types for vector and tensor quantities. That is why we do not see bugs for that, and users suffer or do not know they can do any better. Writing unit tests like:
is not enough and is simply wrong. But this is exactly what we have unfortunately done for years 😢
It seems that Pint supports those operations already: https://pint.readthedocs.io/en/stable/user/numpy.html#Function/Method-Support.
We all expect that `length * time` will not result in `speed`, because that would clearly be an error in the calculation. Dot and cross products in the vector quantities domain are operations really similar to `*` and `/` in the scalar domain. Using the wrong operation by mistake results in a different quantity than expected, so not having support for this makes the library unsafe for projects that work with vector quantities.

As the V2 library now has experimental support for vector and tensor quantities as well, it could be a good idea to add the above to improve safety. It would also improve the ISQ system specification, as we would not have to override the quantity character for derived quantities, i.e.:
The above could be rewritten as:
Please do share your ideas or comments.