gh-128150: improve performances of uuid.uuid* constructor functions #128151
base: main
Conversation
The changes themselves look good at first glance. On the other hand: if performance is really important, there are dedicated packages to calculate UUIDs (bindings to Rust or C) that are much faster. One more idea to improve performance: add a dedicated constructor that skips the checks. For example add to
Results in
I also thought about expanding the C interface for the module, but it would have been too complex as a first iteration. As for third-party packages, I do know about them, but there might be slight differences in which methods they use for the UUID (and this could be a blocker for existing code, namely when switching to another implementation).
I also had this idea but haven't tested it in this first iteration. I wanted to get some feedback (I feel that the performance gains are fine but OTOH, the code is a bit uglier =/)
Force-pushed from 4f2744a to 0710549
Ok, the benchmarks are not always very stable, but I do see improvements with the dedicated constructor. I need to go now, but I'll try to see which version is the best and the most stable.
So, we're now stable and consistent:
Strictly speaking, the uuid1() benchmarks can be considered significant, but only if you consider a 4% improvement significant, which I did not; I only kept improvements over 10%. The last column is the same as the second one (PGO, no LTO) but using
Nice improvement overall! Personally I am not a fan of the lazy imports here, but I'll let someone else decide on that.
The entire module has been written so as to reduce import times, but I understand. I'll address your comments tomorrow and will also check whether I can remove some unnecessary micro-optimizations. Thank you!
In this commit, we move the rationale for using HACL*-based MD5 instead of its OpenSSL implementation from the code to this note. HACL*-based MD5 is 2x faster than its OpenSSL implementation for creating the hash object via `h = md5(..., usedforsecurity=False)` but `h.digest()` is slightly (yet noticeably) slower. Overall, HACL*-based MD5 still remains faster than its OpenSSL-based implementation, whence the choice of `_md5.md5` over `hashlib.md5`.
In this commit, we move the rationale for using OpenSSL-based SHA-1 instead of its HACL* implementation from the code to this note. HACL*-based SHA-1 is 2x faster than its OpenSSL implementation for creating the hash object via `h = sha1(..., usedforsecurity=False)` but `h.digest()` is almost 3x slower. Unlike HACL* MD5, HACL*-based SHA-1 is slower than its OpenSSL-based implementation, whence the choice of `hashlib.sha1` over `_sha1.sha1`.
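The creation-vs-digest trade-off described in these two notes can be measured with a small `timeit` sketch (numbers vary per build; the `_md5` import is guarded because the HACL*-based module may be absent on builds configured without it):

```python
import hashlib
import timeit

try:
    from _md5 import md5 as hacl_md5  # HACL*-based, bypasses hashlib dispatch
except ImportError:  # built without the bundled HACL* module
    hacl_md5 = None

def bench(ctor, payload=b"\x00" * 16, n=50_000):
    # Time hash-object creation plus digest(), the two costs discussed above.
    return timeit.timeit(
        lambda: ctor(payload, usedforsecurity=False).digest(), number=n
    )

if hacl_md5 is not None:
    print("HACL* md5:  ", bench(hacl_md5))
print("hashlib.md5:", bench(hashlib.md5))
```

Both constructors produce identical digests, so the choice is purely about speed and availability.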
Ok, here are the final benchmarks:
Agreed that this is slightly slower (roughly a constant 20 ns slower, which may be because I switched from
I left a few small remarks, but I am also OK with leaving the code as is. The PR improves performance for the main cases, does not make the other cases slower, and code complexity stays roughly the same.
Not sure what happens, but I'm seeing slowdowns. How can I check that constant folding was done? EDIT: I'll regenerate the benchmarks to be sure. Wait a bit.
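One way to check whether the compiler folded a constant expression is to inspect the code object's constants and bytecode with `dis` (a sketch; the exact folding behaviour depends on the CPython version, so no particular outcome is claimed here):

```python
import dis

def chained(x):
    return not 0 <= x < 1 << 128

# If `1 << 128` was folded at compile time, the big integer appears
# directly in co_consts and the bytecode has no run-time shift;
# otherwise 1 and 128 are loaded and shifted on every call.
print((1 << 128) in chained.__code__.co_consts)
dis.dis(chained)
```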
Here are the final benchmarks:
We are indeed faster. Note that with manual constant folding I also get the same numbers (I just regenerated everything from scratch); I think we sometimes have noise. I'll update the NEWS entry as well to reflect the latest numbers.
Replacing
On main it is:
With fallback MD5, we're a bit faster for small lengths:
I think I can live with
Since we're always bundling the HACL* MD5 implementation, I wondered whether we could just use it. Or do you think users would prefer if we used `hashlib` explicitly (and OpenSSL when available)?
It's possible to build Python without HACL*, mainly for environments where cryptography is somehow regulated (e.g. the infamous FIPS mode). It might be OK to use HACL* by default in hashlib, possibly making it a bit faster for everyone (who didn't opt out). But UUID should, IMO, use the default. We agree on that, hit the green button :)
There are some points that can be addressed:

- We can drop some micro-optimizations to reduce the diff. Most of the time is taken by function calls and loading integers.
- HACL* MD5 is faster than OpenSSL MD5, so it's better to use the former. ~~For consistency, we'll rely on the OpenSSL-based implementation even if it's a bit slower.~~ Using `_md5.md5` or `from _md5 import md5` is a micro-optimization that can be dropped without affecting performance too much (see cff86e9 and 7095aa4).
- The rationale for expanding `not 0 <= x < 1 << 128` into `x < 0 or x > 0xffff_ffff_ffff_ffff_ffff_ffff_ffff_ffff` is the non-equivalent bytecode. Similar arguments apply to expanding `not 0 <= x < (1 << C)` into `x < 0 or x > B`, where B is the hardcoded hexadecimal value of `(1 << C) - 1`.

Bytecode comparisons (not useful; constant folding will make them similarly performant)
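The chained comparison and its expanded form are semantically equivalent; a quick sketch checks the boundary values to confirm they always agree:

```python
def chained(x):
    # Original form: relies on a chained comparison and a shift.
    return not 0 <= x < 1 << 128

def expanded(x):
    # Expanded form: two simple comparisons against hardcoded bounds.
    return x < 0 or x > 0xffff_ffff_ffff_ffff_ffff_ffff_ffff_ffff

# Probe below the range, both ends of the range, and above it.
for probe in (-1, 0, (1 << 128) - 1, 1 << 128):
    assert chained(probe) == expanded(probe)
```

The difference between the two is therefore purely in the generated bytecode, not in behaviour.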
`uuid.*` functions #128150

📚 Documentation preview 📚: https://cpython-previews--128151.org.readthedocs.build/