Sending messages is slow when using groups #83
Comments
I'll try to look into this in the next couple of weeks.
Is Redis even involved in the third consumer?
I'm also experiencing very slow behavior when using groups.
Any update on this? Or is there a solution that hasn't come up here? (We have this issue, but we don't use the latest versions. I would switch to the latest/working version if I could see that it's worth it.)
Any update?
Many thanks to @andrewgodwin for all your hard work on this great project! Are there any updates regarding this bug? I am considering whether to use Django Channels on our site in production for the long term (we were hit by the same problem in testing), but I see that this bug has been open for over a year.
These comments don't add anything. You can see there's no update; otherwise there'd be an update... 🙂 This needs someone experiencing the issue to spend the time to dig into it. What we don't have here is why it's happening. Short of that, it'll just have to sit for a while.
I have tried to re-test whether this issue was still present (also checking some differences between different versions of channels-redis and channels). These are my results:
Each test was performed with 200 requests to the same echo consumer. Some good work was done, but I believe it is bad to slow down by an order of magnitude with a single group. However, for the sake of completeness, I will try a more complex test with more than one group to see whether there are drawbacks.
Hi @sevdog. Thanks. If you can work out where the time is being spent, that should help... 🙂
I had run a test logging times inside
I calculated the elapsed time since the function was initially called. These are the time stats:
From these data it seems that
This was still tested with just one group. I will do some other checks with 2 and 10 groups as soon as I can.
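A minimal way to collect per-call timings like the ones described above (this is an illustrative sketch, not the author's actual test harness; `fake_group_send` stands in for the real `channel_layer.group_send`, which hits Redis) is to wrap the awaited call and measure with `time.monotonic()`:

```python
import asyncio
import time

async def timed(label, coro):
    # Measure wall-clock time of a single awaited call.
    start = time.monotonic()
    result = await coro
    elapsed = time.monotonic() - start
    print(f"{label}: {elapsed * 1000:.2f} ms")
    return result, elapsed

# Stand-in for channel_layer.group_send(); the real call does a Redis round trip.
async def fake_group_send(group, message):
    await asyncio.sleep(0.01)  # simulate ~10 ms of network/Redis latency

async def main():
    _, elapsed = await timed("group_send", fake_group_send("chat", {"type": "echo"}))
    return elapsed

elapsed = asyncio.run(main())
```

Logging like this (rather than profiling) keeps the overhead per call negligible, at the cost of only seeing the totals you explicitly instrument.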
I have to apologize for my previous statement. It seems that what takes the time is around channels_redis/channels_redis/core.py Lines 614 to 622 in 0c27648
I have also performed another test with 3 different clients which connect to my 3 test consumers; these are the times:
I will try to add more automation to my tests to have a better point of view on how the situation evolves with more clients (maybe with a graph or something similar).
I have done some other tests with a growing number of clients connecting to these test consumers (1, 6, 11, 16, 21, 26 and 31), with these results: I doubt this means anything good. I also need a comparison with channels v1, and this should be tested in a multi-host environment. My tests were performed on my machine with Docker, but they should be done in a semi-real environment (server and clients separated). Also, a better way to collect time-related data should be used (instead of print/log).
I have not tested against previous versions (channels v1.x / asgi_redis 1.x), so this is not a full comparison between major versions. Also, as I stated before, an accurate test requires different hosts for clients and server. This requires more time (and money). Moreover, my tests were only performed against Python 3.6; other versions should also be tested, because this may have some kind of relation with asyncio handling.
Is there a way to send directly between consumers in the meantime? It seems like calling
is the current problem, but what about:
I'll try to test this soon; however, managing groups ourselves seems like a pain. EDIT: reading the docs, it says that you could maintain the groups yourself. I'll have a look at just using channel_layer.send() vs channel_layer.group_send() with 1-3 consumers.
Running a quick test, I got 1ms and 2ms respectively. What does
Is the goal to get inter-process communication to be as fast as websockets? That makes sense on some level; but which portion (pickling in Python, sending to Redis, then receiving from Redis and unpickling) is the bottleneck?
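One cheap way to isolate the serialization share of that question is to time the pickle round trip on its own (a hedged sketch; the message shape here is illustrative, and channels_redis's actual wire format may add a prefix around the pickled payload):

```python
import pickle
import time

# Illustrative 1 KB message; real messages vary in size and nesting.
message = {"type": "chat.message", "text": "x" * 1024}

N = 10_000
start = time.perf_counter()
for _ in range(N):
    blob = pickle.dumps(message, protocol=pickle.HIGHEST_PROTOCOL)
    restored = pickle.loads(blob)
serialize_ms = (time.perf_counter() - start) * 1000 / N

print(f"pickle round-trip: {serialize_ms:.4f} ms per message")
```

If this comes out in the low-microsecond range on your hardware, serialization is probably not the dominant cost, which would point at the Redis round trips instead.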
#223 was related. (A speed-up was reported after switching the Redis connection to unix sockets.)
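For reference, switching to a unix socket is just a settings change (a sketch, assuming your channels_redis version accepts a `unix://` address in `hosts`; the socket path is an assumption, adjust it to where your Redis actually listens):

```python
# settings.py (sketch; socket path is an assumption for illustration)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            # The TCP form would be ("localhost", 6379); #223 reported a
            # speed-up from pointing at Redis's unix domain socket instead,
            # which skips the TCP/loopback overhead on a same-host setup.
            "hosts": ["unix:///var/run/redis/redis.sock"],
        },
    },
}
```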
Channels appears to create a number of Lua scripts that get executed. Is it worth considering "pre-loading" those scripts into Redis at start-up and then just calling them during processing? I'm wondering whether some of these delays are caused by the scripts being loaded and parsed before being executed. (Unfortunately, I don't know of any way of measuring that.)
Hey @KenWhitesell. 👋 I don't think that's the issue. They're cached on first load. 572aa94 was added to make the most of this. |
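For context on why first-load caching is enough: Redis keys its server-side script cache by the SHA-1 of the script source (`SCRIPT LOAD` returns it, `EVALSHA` runs the already-parsed script by it), so after the first execution only the digest crosses the wire. A small illustration of the identity the cache keys on:

```python
import hashlib

# Redis identifies a cached Lua script by the SHA-1 hex digest of its source.
# SCRIPT LOAD returns this digest; EVALSHA <digest> then runs the parsed
# script without resending (or re-parsing) the body.
script = "return redis.call('GET', KEYS[1])"
digest = hashlib.sha1(script.encode()).hexdigest()

print(digest)  # 40 hex characters, the same value SCRIPT LOAD would return
```

So once a client library computes and reuses these digests, script parsing should only cost anything on the very first call per Redis server.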
I just used
Using 32 clients (which were processes on another host) I got the following results:
I then tried to figure out how the internal code of the method could be improved and got interested in this piece of code: channels_redis/channels_redis/core.py Lines 665 to 668 in 243eb7e
Since these tasks do not have any relation that prevents concurrent execution, this can be improved by dispatching them concurrently with `asyncio.gather`:

```python
await asyncio.gather(*[
    connection.zremrangebyscore(
        key, min=0, max=int(time.time()) - int(self.expiry)
    )
    for key in channel_redis_keys
])
```

which gives the following time results:
Within my test this reduced both the overall and the internal time.
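The reported reduction matches what `asyncio.gather` does: the independent awaits overlap instead of queueing. A self-contained demo of the effect (with `asyncio.sleep` standing in for the independent `zremrangebyscore` calls; the ~10 ms latency is an assumption for illustration):

```python
import asyncio
import time

async def fake_zremrangebyscore(key):
    # Stand-in for one independent Redis call (~10 ms of I/O wait).
    await asyncio.sleep(0.01)

async def sequential(keys):
    # One await after another: total time is roughly the sum of latencies.
    start = time.monotonic()
    for key in keys:
        await fake_zremrangebyscore(key)
    return time.monotonic() - start

async def concurrent(keys):
    # gather overlaps the waits: total time is roughly the max latency.
    start = time.monotonic()
    await asyncio.gather(*[fake_zremrangebyscore(key) for key in keys])
    return time.monotonic() - start

keys = [f"group:{i}" for i in range(10)]
seq = asyncio.run(sequential(keys))
conc = asyncio.run(concurrent(keys))
print(f"sequential: {seq:.3f}s, gathered: {conc:.3f}s")
```

With 10 keys the sequential version takes about 10x the single-call latency, while the gathered version stays close to a single call, which is consistent with the timing improvement reported above.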
Hey sevdog, thanks for the update. Once Django 3.2a1 is out the way I will be swinging back to Channels so will take a look at this. 👍 |
@carltongibson Are there any alternatives? |
Hi both. No, this is still pending investigation. I've not had bandwidth as yet. If you'd like to dig in that would be super! |
I have a doubt @carltongibson , |
@aryan340 without testing there's no real way to say. I would suspect though that at scale you'd want to be looking into more fully featured and battle tested message brokers. There's a lot in this space that's simply out of scope for us here. |
For those following this issue, it might be interesting for you to know there is a new channel layer implementation which is a lightweight layer on top of Redis Pub/Sub. Redis Pub/Sub is efficient and fast, and this new channel layer thinly wraps it so we can borrow its speed/efficiency. If you run a ton of messages through your app's channel layer (particularly if using a lot of
It's called
Hope it helps someone!
Disclaimer: I started the
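The package name in the comment above did not survive extraction. channels_redis itself later shipped a Pub/Sub-based layer as `channels_redis.pubsub.RedisPubSubChannelLayer`; assuming that is the kind of layer meant, trying it is a settings change (sketch):

```python
# settings.py (sketch; assumes the Pub/Sub layer shipped with channels_redis
# is the implementation being described, which is an inference, not stated
# in the comment above)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
    },
}
```

The trade-off to be aware of: Redis Pub/Sub is fire-and-forget, so messages sent while a consumer is disconnected are dropped rather than queued, which is often acceptable for websocket fan-out but differs from the sorted-set-backed default layer.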
It seems that `groups` slow down sending messages ~100 times. I've created 3 different echo consumers in order to compare them:
I got these timing results:
Here is the consumers code:
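The consumers code referenced here did not survive extraction. A minimal sketch of the kind of comparison described (a plain echo consumer vs one that echoes through `group_send`; this reconstructs the idea, not the author's original code) might look like:

```python
# Sketch only: illustrative reconstruction of the benchmarked consumers,
# not the original code from this issue.
from channels.generic.websocket import AsyncWebsocketConsumer


class DirectEchoConsumer(AsyncWebsocketConsumer):
    # Echoes straight back over the socket; the channel layer is not involved.
    async def receive(self, text_data=None, bytes_data=None):
        await self.send(text_data=text_data)


class GroupEchoConsumer(AsyncWebsocketConsumer):
    # Echoes via group_send, so every message takes a round trip
    # through the channel layer (Redis) before reaching the socket.
    group_name = "echo"

    async def connect(self):
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)

    async def receive(self, text_data=None, bytes_data=None):
        await self.channel_layer.group_send(
            self.group_name, {"type": "echo.message", "text": text_data}
        )

    async def echo_message(self, event):
        await self.send(text_data=event["text"])
```

Timing the same client request against each consumer is what surfaces the ~100x gap this issue reports: the first path never leaves the process, while the second pays the full channel-layer round trip per message.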