Change Bloom Filter implementation for Sparse Joins #1686
Comments
@anish749 interesting stuff! I'll have a look at this and I'll come back to you.
I've worked on this a bit. I did some benchmarks on Algebird Bloom Filters; we can optimise the implementation and improve wall time. I'll post something soon.
Nice! That's awesome! To be honest I haven't had proper time to have a look at this! Maybe today...
Here is a PR in Algebird with an implementation that could be adopted with no breaking changes in any API: twitter/algebird#673
Our benchmarks have shown improvements around 30% 😄
That's awesome. I'll try this out tomorrow 👍
I tried this out, but I'm unable to replicate this. It is a bit difficult to get snapshot versions and try this out without releasing the internal mono repo; I was in dependency hell for quite some time. Do you have any canary to test directly? I did rename some functions, copied these classes, and sort of hacked my way around to execute this optimisation, but it looks so dirty that I suspect I am introducing bugs!!
There is another option, which is to use Guava, which offers a BloomFilter with similar performance; however, we would need to add another implicit evidence similar to Hash128 for
Closed via #1806 |
In the current version of sparse joins and lookups, we are using Algebird's Bloom Filter Aggregators.
This works well and is very easy and elegant to express in Scala, but it's not performant. It takes a long time and a lot of CPU to build this filter (~3 hrs wall time for 3M entries). For smaller datasets (~5M entries) I found a simple HashSet filter to work better, though that starts to approach the SideInput size limit. For around 50M entries, it starts taking longer than a shuffle join.
The Bloom Filters from Breeze, as well as the ClearSpring stream-lib, however, get constructed within 3 to 4 minutes of wall time for 3M entries.
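A large part of why the mutable implementations build so much faster is that adding an element only flips a handful of bits in one shared bit set, rather than allocating and merging immutable structures per element. A minimal sketch of that construction pattern in Java (hypothetical code — not the actual API of stream-lib, Breeze, or Algebird):

```java
import java.util.Arrays;
import java.util.BitSet;

// Minimal mutable Bloom filter sketch, illustrating why in-place
// construction is cheap: add() only sets numHashes bits per element.
class SimpleBloomFilter {
    private final BitSet bits;
    private final int numBits;
    private final int numHashes;

    SimpleBloomFilter(int numBits, int numHashes) {
        this.bits = new BitSet(numBits);
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    // Double hashing: derive the i-th bit index from two base hashes.
    // The second hash here is a cheap bit rotation, for illustration only;
    // a real implementation would use a proper hash such as MurmurHash.
    private int index(byte[] key, int i) {
        int h1 = Arrays.hashCode(key);
        int h2 = (h1 >>> 16) | (h1 << 16);
        return Math.floorMod(h1 + i * h2, numBits);
    }

    void add(byte[] key) {
        for (int i = 0; i < numHashes; i++) {
            bits.set(index(key, i)); // mutate the shared bit set in place
        }
    }

    boolean mightContain(byte[] key) {
        for (int i = 0; i < numHashes; i++) {
            if (!bits.get(index(key, i))) {
                return false; // definitely not present
            }
        }
        return true; // possibly present (false positives allowed)
    }
}
```

An immutable monoid-style filter, by contrast, pays an allocation (or a full union) on every merge, which is where the hours of wall time go at millions of entries.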
Suggestion 1:
I feel using stream-lib from AddThis / ClearSpring instead of Algebird's BloomFilter would allow us to optimise a lot more jobs. I am not sure why, but I was getting a very high false positive rate with Breeze BFs (maybe I was making a mistake, as it seems weird).
Breaking API changes(?)
Algebird provides us with a Hash128[K]; however, the BF in stream-lib takes a byte array / String, which means we would need to add an implicit ByteArrayEncoder for the primitive types present in Algebird's Hash128 to make sure we are not breaking any API.
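Such an evidence could look roughly like the following. This is a Java sketch of the idea (in Scio it would be a Scala implicit type class); the names `ByteArrayEncoder` and `ByteArrayEncoders` are assumptions for illustration, not existing API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical type-class-style evidence: anything we can turn into
// bytes can be fed to a byte-array-based Bloom filter like stream-lib's.
interface ByteArrayEncoder<K> {
    byte[] encode(K key);
}

// Instances for some of the primitive types that Algebird's Hash128
// already covers, so existing call sites would keep compiling.
class ByteArrayEncoders {
    static final ByteArrayEncoder<Integer> INT =
        k -> ByteBuffer.allocate(4).putInt(k).array();
    static final ByteArrayEncoder<Long> LONG =
        k -> ByteBuffer.allocate(8).putLong(k).array();
    static final ByteArrayEncoder<String> STRING =
        k -> k.getBytes(StandardCharsets.UTF_8);
}
```

With instances in implicit scope for the same types Hash128 covers, callers would not need to change any code.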
Can we go ahead with adding this additional implicit, and move away from Algebird's BF for sparse functions?
Also, because this adds an additional dependency, should this be a part of scio-extra? That would again mean removing those methods from scio-core, thus breaking the API again. I am a bit confused about adding an extra dependency.

I have some code in an internal repository using stream-lib, and here are some benchmarks. Based on suggestions on the above, I can create a PR fixing this.
Here is a Dataflow join of ~4M entries between two datasets of 1B each.
Using Algebird Bloom Filters
Using ClearSpring / AddThis BloomFilters
Suggestion 2:
Optimise Algebird's Bloom Filters to make them run faster. The Bloom Filter in Algebird is very slow to construct, but it can surely be fixed to work faster.
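Whichever implementation wins, the sizing maths stays the same: for n expected entries and target false-positive rate p, the optimal width is m = −n·ln(p)/(ln 2)² bits and the optimal number of hash functions is k = −ln(p)/ln 2. A small sketch of these standard formulas (the helper names are mine, not from Algebird or stream-lib):

```java
// Standard Bloom filter sizing formulas; helper names are hypothetical.
class BloomSizing {
    // Optimal number of bits for n entries at false-positive rate p:
    // m = -n * ln(p) / (ln 2)^2
    static long optimalNumBits(long n, double p) {
        return (long) Math.ceil(-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    // Optimal number of hash functions: k = -ln(p) / ln 2.
    // Note this depends only on p, not on n.
    static int optimalNumHashes(double p) {
        return (int) Math.ceil(-Math.log(p) / Math.log(2));
    }
}
```

For ~3M entries at a 1% false-positive rate this works out to roughly 29M bits (about 3.6 MB), which is why such a filter comfortably fits in a side input regardless of which library builds it.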