It looks like accessing the scope for URL kwargs is a big performance hit. In fact, reading anything from the scope seems to introduce some form of delay.

I'm getting very mixed results, so I'm unsure whether this is now an issue with Redis. It is definitely the Python process using 100% of the CPU, though.
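To illustrate what I mean, here is a rough sketch of the relay pattern with the scope read kept out of the hot path: the url kwarg is pulled from scope once in connect() and never touched again in receive(). The route kwarg, group name and class name are made up for the example, not taken from my actual project.

# Illustrative sketch only - names are invented for the example.
# The url kwarg is read from scope once at connect time; receive() never
# touches self.scope and just relays each frame to the group.
from channels.generic.websocket import AsyncWebsocketConsumer


class RelayConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # scope['url_route']['kwargs'] holds the parameters captured by the
        # URLRouter pattern, e.g. path('ws/room/<room_name>/', ...)
        self.room_name = self.scope['url_route']['kwargs']['room_name']
        self.group_name = 'room_%s' % self.room_name
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)

    async def receive(self, text_data=None, bytes_data=None):
        # Assumes text frames; broadcast the exact data received to the group.
        await self.channel_layer.group_send(
            self.group_name,
            {'type': 'relay.message', 'text': text_data},
        )

    async def relay_message(self, event):
        # Handler matching the 'relay.message' type above; pushes the data
        # out to this client's websocket.
        await self.send(text_data=event['text'])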
On Tuesday, 27 March 2018 09:51:44 UTC+1, James Foley wrote:
Apologies for the double post, I've removed the other one.

PYTHONASYNCIODEBUG doesn't appear to give me any warnings.

This test I am running is only between two users, each user belonging to the same group. All messages received are pushed back out to all users and filtered client-side.

Assuming 'channels.layers.InMemoryChannelLayer' is the right backend for the in-memory layer, none of my websocket connections get past HANDSHAKING with it, so I'm not exactly sure what's up there.
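For reference, this is roughly how I'm switching the backend in settings.py. Treat the values as illustrative rather than my exact config; the Redis host shown is just the usual local default.

# settings.py - illustrative only.

# In-memory channel layer (single process, no Redis involved):
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels.layers.InMemoryChannelLayer',
    },
}

# The channels_redis layer I was using before:
# CHANNEL_LAYERS = {
#     'default': {
#         'BACKEND': 'channels_redis.core.RedisChannelLayer',
#         'CONFIG': {'hosts': [('127.0.0.1', 6379)]},
#     },
# }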
On Monday, 26 March 2018 17:35:34 UTC+1, Andrew Godwin wrote:

(You double-posted this so I'm just going to reply to this one)

I need to know a bit more information about what the slowdown is - in particular:

* Have you run with PYTHONASYNCIODEBUG=1 set as an environment variable to check for non-yielding coroutines?
* What sort of messages per second are we talking about? You say every 20ms, but how many clients are listening on the group (so what does it multiply out to)?
* Do you get the same CPU usage issues if you try using the in-memory channel layer rather than the Redis one?

The last point would be especially interesting to check, as the node relay server you built does not, I imagine, use Redis as a cross-server transport in the middle, and so that would be the major difference. 200 requests a second should still be fine, though, so the others are worth knowing about as well.

One further thing you could do is build a simple echo server with no use of groups or the channel layer at all and see how that performs, to narrow down where the performance issue lies.

Andrew

On Mon, Mar 26, 2018 at 6:15 AM, James <jamesrich...@gmail.com> wrote:

I'm using Channels 2 to build a shared 3D model viewing tool, but I'm running into performance issues where Channels can't keep up and uses 100% of a single core. This results in clients just receiving a slow trickle of messages rather than the fast stream I was expecting.

I ended up stripping my consumer all the way back to basically a relay server: a client sends a message, and the server broadcasts the exact data received to a group. I am sending a lot of data, though (a message every 20ms or so), so I'm not sure if that is causing my issues.

Using Node I built the same relay server and I have zero issues with speed or server performance.

Is this down to how many workers I have running vs the data I'm sending, or perhaps the overhead of Django?
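The bare echo consumer Andrew suggests, with no groups and no channel layer at all, would look something like the sketch below. The class name is illustrative; each frame is simply sent straight back to the client that sent it.

# Illustrative echo consumer - no groups, no channel layer.
from channels.generic.websocket import AsyncWebsocketConsumer


class EchoConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        # Send whatever arrived, text or binary, back on the same socket.
        await self.send(text_data=text_data, bytes_data=bytes_data)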