Service Bus Queue Health Check Failing Intermittently with Unauthorized Exception #1735
Comments
See #1724 (comment)
@marioleed I don't think it's the same issue. If there was something wrong with the config (using peek mode or lack of sufficient permissions for the health check), it would fail in all the pods, all the time. But we see it happen randomly in just one of the pods.
Yeah, it does feel strange that it's intermittent. But the error message seems to be connected to the issue in 6.1.0. What role are you using for the Service Bus queue (what claims does it have)? Can you make sure the config is correct on all pods? Can you log the health check's Service Bus version? I want to rule out that the pods are running different versions.
We are using the Azure Service Bus Data Owner role. The pods all share the same deployment template, so the AKS-specific config and the container image they use are the same (and therefore the same health check and Service Bus versions). Is there anything specific you want me to check? I am wondering if it could have something to do with auth tokens expiring or not refreshing correctly.
@rithvikp1998 I'm not sure, but shouldn't you get a different error message if the token had expired? For the pods that are working, I would like to confirm that they are indeed running 6.1.0 and not < 6.1.0.
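One quick way to confirm the running version is to log the assembly version of the health check package at startup. This is only a sketch: it assumes the `HealthChecks.AzureServiceBus` package and its public `AzureServiceBusQueueHealthCheck` type, and `logger` stands in for whatever `ILogger` instance the app already has:

```csharp
// Sketch: log the version of the HealthChecks.AzureServiceBus assembly
// actually loaded in this pod, to rule out mixed versions across pods.
var asm = typeof(HealthChecks.AzureServiceBus.AzureServiceBusQueueHealthCheck).Assembly;
logger.LogInformation(
    "HealthChecks.AzureServiceBus version: {Version}",
    asm.GetName().Version);
```

Comparing this log line across healthy and failing pods would confirm whether they all run 6.1.0.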
@rithvikp1998 Feel free to reopen if you still have issues. Closing for now due to inactivity. |
Please fill in the following sections to help us fix the issue
What happened:
We are using the package to run a health check on our Service Bus queue. Our service runs on AKS and uses a managed identity to access Service Bus. The health checks run fine and return healthy most of the time, but every once in a while they start failing in one particular pod and keep failing until that pod is refreshed. During this time the pod can still access the same Service Bus queue and process messages through the Service Bus SDK; only the health check call fails. Both share the same credential object, so it is not a credential issue either. It appears to be a bug in the health check itself, and we would like help to dig into and fix it.
The error message is:
Unauthorized access. 'Listen' claim(s) are required to perform this operation. Resource:...
What you expected to happen:
The Service Bus queue health check should not fail while the pod can still access the same queue through the Service Bus SDK.
How to reproduce it (as minimally and precisely as possible):
Since the issue is happening randomly, I don't know how we can reproduce it.
Source code sample:
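No sample was attached; a minimal sketch of how a setup like the one described is typically wired up is below. The namespace, queue name, and the use of `DefaultAzureCredential` are assumptions for illustration, not the reporter's actual code; the `AddAzureServiceBusQueue` overload taking a fully qualified namespace and a `TokenCredential` is the one used for managed identity scenarios:

```csharp
using Azure.Identity;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Sketch (assumed setup): the same credential instance is shared by the
// Service Bus SDK client and the health check, as described in the report.
var credential = new DefaultAzureCredential();

builder.Services.AddHealthChecks()
    .AddAzureServiceBusQueue(
        fullyQualifiedNamespace: "<namespace>.servicebus.windows.net",
        queueName: "<queue-name>",
        tokenCredential: credential,
        name: "servicebus-queue");
```

Because the SDK client and the health check share one credential object, a token refresh problem would be expected to affect both, which is part of why the behavior is puzzling.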
Anything else we need to know?:
Environment: