[Redis] messages lost and non-atomic operations #750
Comments
That should indeed be addressed; I'm not sure what the best approach to fix it would be, though.
Atomic operations are probably harder to implement. What about performing the add operations before the remove ones? There may be duplicated messages, but that is still better than losing messages.
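A minimal in-memory sketch of this ordering idea (plain Python lists standing in for Redis lists; the names and the `crash_between` flag are illustrative, not Enqueue's actual code). Adding the message to its destination before removing it from the source means a crash between the two steps leaves a duplicate rather than a lost message:

```python
# Sketch: add-before-remove ordering turns the failure mode from
# "message lost" into "message duplicated" (at-least-once delivery).
# Lists simulate Redis lists; crash_between simulates a connection
# drop between the two non-atomic commands.

def move_add_first(source: list, dest: list, crash_between: bool = False):
    """Move the head message, performing the add before the remove."""
    if not source:
        return None
    msg = source[0]
    dest.append(msg)          # step 1: add to destination first
    if crash_between:
        return msg            # simulated crash: msg now exists in BOTH lists
    source.pop(0)             # step 2: remove from source
    return msg

queue, processing = ["m1"], []
move_add_first(queue, processing, crash_between=True)
print(queue, processing)  # ['m1'] ['m1'] -- duplicated, but not lost
```

With the opposite ordering (remove first, then add), the same crash window would drop the message entirely.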
To cope with the duplication issue, could we keep a hash set of UUIDs and check against it? The hash set could be deleted when the queue becomes empty.
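A sketch of that deduplication idea, again with in-memory Python structures (in Redis itself this would be `SADD`/`SISMEMBER` on a set key; all names here are hypothetical). Consumers record each processed message ID and skip IDs they have already seen:

```python
# Sketch: UUID-based deduplication on the consumer side.
# seen_ids simulates a Redis set of already-processed message ids.

import uuid

seen_ids: set = set()
results: list = []

def handle(message_id: str, body: str) -> None:
    if message_id in seen_ids:
        return                 # duplicate delivery: skip
    seen_ids.add(message_id)
    results.append(body)       # simulated "real" processing

mid = str(uuid.uuid4())
handle(mid, "payload")
handle(mid, "payload")         # redelivery of the same message
print(results)  # ['payload'] -- processed exactly once

# When the queue drains, the dedup set can be dropped:
seen_ids.clear()
```

Note the set grows with traffic unless it is cleared or its entries are expired, which is why deleting it when the queue empties (or using per-entry TTLs) matters.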
If you need a strict message delivery guarantee, then I'd suggest using a real broker such as RabbitMQ or Amazon SQS. We've investigated the issue on our end, looking for a solution to fix it. Unfortunately, we've not come up with any better approach. The current one is not ideal, but it remains the best one.
During our usage of Enqueue Redis over a possibly unstable connection with lots of retries, we observed roughly 3-5 messages lost out of ~50,000.
Having looked into the source, the receiveMessage/processResult and acknowledge/requeue paths are indeed non-atomic command sequences, such as BRPOP + ZADD and ZREM + LPUSH.
Do you think implementing atomic operations, such as RPOPLPUSH, BRPOPLPUSH, or some kind of MULTI or Lua scripts, would be feasible?
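For reference, a minimal in-memory sketch of the "reliable queue" pattern the atomic commands enable (Python lists standing in for Redis lists; this is not Enqueue's actual implementation). RPOPLPUSH atomically moves a message from the queue to a per-consumer processing list, and an acknowledgement removes it from the processing list only after the work is done, so a consumer crash leaves the message recoverable rather than lost:

```python
# Sketch of the RPOPLPUSH-based reliable queue:
#   1. atomically move queue tail -> processing head (single command)
#   2. process the message
#   3. ack by removing it from the processing list (LREM in Redis)
# A crash between 1 and 3 leaves the message in `processing`,
# where a recovery job can requeue it.

def rpoplpush(queue: list, processing: list):
    """Simulates the single atomic RPOPLPUSH step."""
    if not queue:
        return None
    msg = queue.pop()            # RPOP: take from the tail
    processing.insert(0, msg)    # LPUSH: put at the head
    return msg

def ack(processing: list, msg) -> None:
    """Simulates LREM after successful handling."""
    processing.remove(msg)

queue = ["m2", "m1"]             # "m1" is at the tail, next to be consumed
processing: list = []

msg = rpoplpush(queue, processing)
# ... if the consumer crashed here, msg would still sit in `processing` ...
ack(processing, msg)             # normal path: remove only after handling
print(queue, processing)  # ['m2'] []
```

In Redis itself this corresponds to `RPOPLPUSH`/`BRPOPLPUSH` (or `LMOVE`/`BLMOVE` in newer versions); multi-key transitions that don't fit a single command can be made atomic with a Lua script via `EVAL`, since Redis executes scripts atomically.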