Friday, June 26, 2020

Re: Starving queue with django-celery and rabbitmq

Could some charitable soul help me with this?

Rogério Carrasqueira



On Thu, Jun 25, 2020 at 5:12 PM Rogerio Carrasqueira
<rogerio.carrasqueira@gmail.com> wrote:
>
> Hello guys!
>
> I think this is somewhat off-topic, since it is probably more of a
> RabbitMQ question than a Django one, but I believe someone here may
> already have run into a similar situation.
>
> I have a Django application that runs with Celery 3.1 and RabbitMQ
> 3.8.2. It uses RabbitMQ to distribute messages among a series of
> workers that perform several different tasks.
>
> What happens is that, at a given moment, when a very large number of
> tasks enters a particular queue, that queue seems to take all the
> workers for itself. It is as if those tasks gained an absurd priority
> and Celery decided that this one queue deserved all the attention in
> the world, with the workers working only for it.
>
> In this scenario, absurd situations begin to happen: I have a worker a
> that consumes queue1 and a worker b that consumes queue2, and I have
> configured in CELERY_ROUTES that task x should go to worker b's queue.
> When an enormous number of messages enters queue1, workers a and b both
> start working only on queue1 and queue2 is set aside. Only when worker
> b hits an execution error does the task that was in queue1 end up
> allocated to queue2, and then things become chaotic: the queue
> organization turns into a mess and a series of bottlenecks appears in
> the system.
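>
> To make the routing expectation concrete, here is a simplified,
> hypothetical sketch of the kind of task involved (the real tasks are
> more involved); in the sketch nothing overrides the queue at call
> time, so CELERY_ROUTES below should decide where it goes:
>
> from celery import shared_task
>
> @shared_task(name='tasks.task_1')
> def task_1(record_id):
>     # hypothetical body: heavy work that should only ever be consumed
>     # from queue1, i.e. by worker a
>     return record_id
>
> # dispatched from the Django side as: task_1.delay(42)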
>
> So I am asking you, friends, for some light here. This is how my
> Celery settings are configured:
>
> BROKER_TRANSPORT_OPTIONS = {}
>
> CELERY_IMPORTS = ("core.app1.tasks", "core.app2.tasks")
> CELERYD_TASK_TIME_LIMIT = 7200
>
> CELERY_ACKS_LATE = True
> CELERYD_PREFETCH_MULTIPLIER = 1
> #CELERYD_MAX_TASKS_PER_CHILD = 1
>
> CELERY_TIMEZONE = 'America/Sao_Paulo'
> CELERY_ENABLE_UTC = True
>
> CELERYBEAT_SCHEDULE = {
>     'task1': {
>         'task': 'tasks.task1',
>         'schedule': crontab(minute='*/30'),
>     },
>     'task2': {
>         'task': 'tasks.task2',
>         'schedule': crontab(minute='*/30'),
>     },
> }
>
> CELERY_RESULT_BACKEND = "redis://server-1.klglqr.ng.0001.use2.cache.amazonaws.com:6379/1"
> CELERYBEAT_SCHEDULER = 'redbeat.RedBeatScheduler'
> CELERYBEAT_MAX_LOOP_INTERVAL = 5
>
> CELERY_DEFAULT_QUEUE = 'production-celery'
> CELERY_SEND_TASK_ERROR_EMAILS = False
>
> CELERY_ROUTES = {
>     'tasks.task_1': {'queue': 'queue1'},
>     'tasks.task_2': {'queue': 'queue2'},
> }
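>
> One thing I notice is that I never declare CELERY_QUEUES explicitly,
> so I am relying on the queues being auto-created. If declaring them
> explicitly helps, I imagine it would look roughly like this sketch
> (using the queue names from my example and the default queue, with a
> direct exchange per queue):
>
> from kombu import Exchange, Queue
>
> CELERY_QUEUES = (
>     Queue('production-celery', Exchange('production-celery'),
>           routing_key='production-celery'),  # default queue
>     Queue('queue1', Exchange('queue1'), routing_key='queue1'),
>     Queue('queue2', Exchange('queue2'), routing_key='queue2'),
> )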
>
> Supervisor settings:
>
> [program:app_core_production_celeryd_worker_a]
>
> command=/usr/bin/python manage.py celery worker -n worker_a%%h -l INFO -c 30 -Q fila1 -O fair --without-heartbeat --without-mingle --without-gossip --autoreload --settings=core.settings.production
> directory=/home/user/production/web_app/app
> user=user
> numprocs=1
> stdout_logfile=/home/user/production/logs/celeryd_worker_a.log
> stderr_logfile=/home/user/production/logs/celeryd_worker_a.log
> autostart=true
> autorestart=true
> startsecs=10
> stdout_logfile_maxbytes=5MB
>
> [program:app_core_production_celeryd_worker_b]
>
> command=/usr/bin/python manage.py celery worker -n worker_b%%h -l INFO -c 30 -Q fila2 -O fair --without-heartbeat --without-mingle --without-gossip --autoreload --settings=core.settings.production
> directory=/home/user/production/web_app/app
> user=user
> numprocs=1
> stdout_logfile=/home/user/production/logs/celeryd_worker_b.log
> stderr_logfile=/home/user/production/logs/celeryd_worker_b.log
> autostart=true
> autorestart=true
> startsecs=10
> stdout_logfile_maxbytes=5MB
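>
> If it helps with the diagnosis: would something like the snippet below
> (run from ./manage.py shell) be the right way to confirm which queues
> each worker is actually consuming from and what it is running at a
> given moment?
>
> from celery import current_app
>
> inspector = current_app.control.inspect()
> print(inspector.active_queues())  # queues each worker is bound to
> print(inspector.active())         # tasks currently executing per worker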
>
> Thanks so much for your help!
>
> Rogério Carrasqueira
