RabbitMQ channel dying while performing computationally intensive job

I am very new to RabbitMQ so my question might be a bit naive and hopefully easily resolved.

I want to dispatch some heavy computational tasks to other machines. To that end I thought I could use pika and RabbitMQ. My setup is as follows:

  • A RabbitMQ server is running in the official Docker image on one machine.
  • The other machines run Docker containers with my libraries and pika.

For sending and consuming the jobs I basically use the setup described in the second tutorial, which uses pika.BlockingConnection and channel.basic_consume, along with channel.basic_ack(delivery_tag=method.delivery_tag) once the job is computed.
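
Roughly, my consumer looks like the following (a minimal sketch along the lines of the tutorial, assuming pika 1.x; the host name, queue name and run_heavy_job are placeholders for my actual code):

    import pika

    def run_heavy_job(body):
        # placeholder for the long-running call into my C library
        pass

    def callback(ch, method, properties, body):
        run_heavy_job(body)
        # acknowledge only after the job has finished
        ch.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='rabbitmq-host'))  # placeholder host
    channel = connection.channel()
    channel.queue_declare(queue='task_queue', durable=True)
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(queue='task_queue', on_message_callback=callback)
    channel.start_consuming()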

Generally, the setup works fine and the jobs are started. However, while a job is running, the consumer disappears from the queue for reasons I don't understand. The unacknowledged message is then requeued and RabbitMQ sends the same job to another consumer. Needless to say, this is unwanted behaviour.

Now I have several questions:

  • Does anyone have an idea why the consumers disappear? They run Python code which calls into a C library. How could I debug what causes the consumers to disappear? (See the logging sketch after this list for the kind of thing I have in mind.)

  • Maybe my RabbitMQ/pika setup is not ideal for my goal. Does anyone have a better suggestion, i.e. a setup that gives me an overview of how many consumers are running, lets me ping the jobs while they are computing, and ideally tells me whether the execution of a particular job resulted in an error?
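
For the debugging part, would something like the following be a sensible way to find out why the connection is dropped? (Just a sketch, again assuming pika 1.x; as far as I understand, pika logs through the standard logging module, so DEBUG output should include heartbeat traffic and the reason a connection is closed. Host and queue names are placeholders.)

    import logging

    import pika
    import pika.exceptions

    logging.basicConfig(level=logging.DEBUG)  # show pika's connection/heartbeat logging

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='rabbitmq-host'))  # placeholder host
    channel = connection.channel()
    channel.queue_declare(queue='task_queue', durable=True)
    channel.basic_consume(
        queue='task_queue',
        on_message_callback=lambda ch, method, props, body: ch.basic_ack(method.delivery_tag))

    try:
        channel.start_consuming()
    except pika.exceptions.AMQPConnectionError as err:
        # the exception text should say why the connection was lost,
        # e.g. missed heartbeats during a long-running callback
        logging.error("connection lost: %r", err)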

Thanks for any suggestions.
