Hi Luis,
If you are getting ChannelFull exceptions under load, it means the channels are not being drained fast enough, which means you need more worker processes.
In practice, that means running more worker instances. If you are using Docker to run them, you simply run multiple copies of the same worker container; all of the workers connect to the same Redis server and together they drain the channels faster.
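A minimal sketch of what the shared configuration looks like, assuming the Channels 1.x asgi_redis backend (the "redis" host name, project name, and routing path below are placeholders for whatever your deployment actually uses):

    # settings.py -- identical in the web (daphne) and worker containers,
    # so every process talks to the same Redis-backed channel layer.
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "asgi_redis.RedisChannelLayer",
            "CONFIG": {
                # Placeholder host name; point this at your shared Redis server.
                "hosts": [("redis", 6379)],
            },
            # Placeholder dotted path to your channel routing.
            "ROUTING": "myproject.routing.channel_routing",
        },
    }

Each worker container then just runs "python manage.py runworker"; with docker-compose, for example, scaling is a matter of something like "docker-compose scale worker=8" (assuming a compose service named "worker").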
Andrew
On Wed, Apr 26, 2017 at 9:13 PM, Luís Antonio De Marchi <luis@snowmanlabs.com> wrote:
First, I need to ask for patience with my English; a translator is helping.

We are building a project forecast to reach millions of connections per second. We have never worked with websockets before. I had heard that Crossbar.io was better, but I have been playing with Django Channels for some time and I love it.

We are about 60% of the way through the project with Django Channels, and I discovered a stress-testing tool called "tsung". With that test I was able to reach the message mentioned above (the ChannelFull error).

I also heard that Django Channels with Docker is fully scalable, but how do you actually scale it?

Sorry for my ignorance; I am very worried that the project will be a failure at launch. It may never reach the expected numbers, but the system will appear on the television network and there is a chance it will actually happen (at least for a few minutes).
On Friday, December 2, 2016 at 02:06:24 UTC-2, Andrew Godwin wrote:

> On Thu, Dec 1, 2016 at 1:03 PM, Hank Sims <hank...@gmail.com> wrote:
>
>> You set it in the channel layer configuration in Django, like this: https://github.com/django/asgi_redis/#usage
>
> Ah, thank you. Sorry I missed that.
>
>> How would you propose this worked? The only alternative to closing the socket is to buffer the messages in memory and retry sending them, at which point you might have the case where the client thinks they have a working connection but it hasn't actually delivered anything for 30 seconds. Hard failure is preferable in distributed systems in my experience; trying to solve the problem with soft failure and retry just makes problems even more difficult to detect and debug.
>
> I guess the "hard failure" I would prefer in this case -- though maybe not all cases -- is simply discarding new outbound messages when their queue is full. Or else some sort of mechanism from within my consumers.py that would allow me to forgo writing to a channel if its queue is full.

You already get this - trying to send to an outbound channel when it is full will raise the ChannelFull exception. What you're seeing is the inbound channel filling up, and the ASGI spec says that websocket protocol servers should drop the connection if they can't send an incoming message from a socket.

Andrew
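To make that last point concrete: with the Channels 1.x / asgi_redis stack, per-channel capacities are set through the "capacity" and "channel_capacity" keys of the layer's CONFIG (see the asgi_redis usage link above), and a send to a full outbound channel can be caught right in consumers.py. A minimal sketch, assuming a Channels 1.x function-based consumer; the "pong" reply is made up, and reaching the exception through message.channel_layer is an assumption about the consumer's message object rather than anything stated in the thread:

    # consumers.py -- skip writing to an outbound channel when it is full,
    # instead of letting ChannelFull propagate out of the consumer.
    def ws_message(message):
        try:
            # immediately=True sends now, so a full channel raises ChannelFull
            # here rather than after the consumer returns.
            message.reply_channel.send({"text": "pong"}, immediately=True)
        except message.channel_layer.ChannelFull:
            # Hard failure: discard this outbound message rather than
            # blocking or retrying.
            pass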