Query Regarding Celery Worker Configuration #8942
akashchevli asked this question in Q&A (unanswered, 0 replies)
I'm currently working with Celery in my Django project and have encountered a specific question regarding Celery worker configuration.
I'm running multiple Celery instances on the same machine for different environments (development, staging, production). Each instance is configured with its own queue name using the @shared_task(queue=CELERY_NAMESPACE) decorator.
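To make the setup concrete, here is a minimal sketch of how the per-environment queue name is used; the DJANGO_ENV variable and the queue_for_env helper are illustrative assumptions, not the exact code from my project:

```python
import os

# Hypothetical helper: map an environment name to that environment's
# queue name, e.g. "dev" -> "CELERY_DEV", "pro" -> "CELERY_PRO".
def queue_for_env(env: str) -> str:
    return f"CELERY_{env.upper()}"

# CELERY_NAMESPACE is assumed to be fixed once per deployment, so each
# environment's tasks should only ever be published to its own queue.
CELERY_NAMESPACE = queue_for_env(os.environ.get("DJANGO_ENV", "dev"))
```

Every task is then declared with @shared_task(queue=CELERY_NAMESPACE), so in development the tasks target CELERY_DEV and in production they target CELERY_PRO.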
However, I've noticed some unexpected behaviour in the Celery workers' logs. Despite using a separate queue name for each environment and configuring the workers accordingly, tasks from one environment are appearing in another environment's worker logs.
For example, even though I'm running separate workers for development (Queue name: CELERY_DEV) and production (Queue name: CELERY_PRO), I'm observing logs meant for production tasks in the development worker's logs.
Moreover, I have noticed that the function responsible for executing a specific task completes only the first half of its execution; the remaining code never runs, and no errors are raised. This behaviour is quite perplexing to me.
To provide more context, here are the commands I am using to start the Celery workers:
Dev Celery Worker:
celery -A blog worker -B --loglevel=DEBUG -c 1 -n blog-celery-dev --queues CELERY_DEV
Production Celery Worker:
celery -A blog worker -B --loglevel=DEBUG -c 1 -n blog-celery-pro --queues CELERY_PRO
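For completeness, the routing side of each environment's Django settings looks roughly like this (a simplified sketch; the "blog.tasks.*" pattern is illustrative, and in the production settings CELERY_NAMESPACE would be "CELERY_PRO" instead):

```python
# Simplified sketch of the per-environment Celery settings, shown here
# with the development values.
CELERY_NAMESPACE = "CELERY_DEV"

# Make the namespace the default queue and route all of the app's tasks
# to it, so no task should ever be published to another environment's
# queue. With the CELERY_ settings namespace in Django, these map to
# Celery's task_default_queue and task_routes options.
CELERY_TASK_DEFAULT_QUEUE = CELERY_NAMESPACE
CELERY_TASK_ROUTES = {
    "blog.tasks.*": {"queue": CELERY_NAMESPACE},
}
```

Each worker is then started with --queues matching its own CELERY_NAMESPACE, as in the commands above, so in principle it should only ever consume its own environment's tasks.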
Despite my efforts to investigate, I have not been able to pinpoint the root cause of these issues. I have checked for shared resources and ensured the correctness of the configuration, but the problems persist.
I would greatly appreciate any guidance, suggestions, or insights you can provide to help me resolve these issues.
Thank you very much for your time and assistance.