Component(s)
target allocator
Describe the issue you're reporting
I have the target allocator set up with the following config:
This setup ignores the selector, and I can see the collector scraping all serviceMonitor resources. I also tried adding matchlabels, since I found an example in one of the test files, but then it ignores all serviceMonitors, even the ones that have the matching label.
I also tried different label matchers, with no luck. I'm trying to understand what I might see in the logs that could help, but all I see is this:
{"level":"info","ts":"2024-10-23T10:29:48Z","msg":"Starting the Target Allocator"}
{"level":"info","ts":"2024-10-23T10:29:48Z","logger":"setup","msg":"Prometheus config empty, skipping initial discovery configuration"}
{"level":"info","ts":"2024-10-23T10:29:48Z","logger":"allocator","msg":"Starting server..."}
{"level":"info","ts":"2024-10-23T10:29:48Z","msg":"Waiting for caches to sync for namespace"}
{"level":"info","ts":"2024-10-23T10:29:48Z","msg":"Caches are synced for namespace"}
{"level":"info","ts":"2024-10-23T10:29:48Z","msg":"Waiting for caches to sync for podmonitors"}
{"level":"info","ts":"2024-10-23T10:29:48Z","msg":"Caches are synced for podmonitors"}
{"level":"info","ts":"2024-10-23T10:29:48Z","msg":"Waiting for caches to sync for servicemonitors"}
{"level":"info","ts":"2024-10-23T10:29:48Z","msg":"Caches are synced for servicemonitors"}
{"level":"info","ts":"2024-10-23T10:43:38Z","logger":"allocator","msg":"Could not assign targets for some jobs","allocator":"per-node","targets":4,"error":"could not find collector for node ip-10-122-54-130.eu-central-1.compute.internal\ncould not find collector for node ip-10-122-54-130.eu-central-1.compute.internal\ncould not find collector for node ip-10-122-54-130.eu-central-1.compute.internal\ncould not find collector for node ip-10-122-54-130.eu-central-1.compute.internal"}
Anything that would shed light on the issue, or any explanation of how to correctly set up the target allocator so that it only collects metrics from selected sources, would be highly appreciated.
Thank you
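(For context on where this selector lives — illustrative only, not the reporter's actual configuration: in the v1beta1 OpenTelemetryCollector CR, a targetAllocator section that selects ServiceMonitors by label might look roughly like the sketch below; the name, mode, and label values are placeholders.)

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: example                  # placeholder name
spec:
  mode: daemonset                # per-node allocation expects a collector on every node
  targetAllocator:
    enabled: true
    allocationStrategy: per-node
    prometheusCR:
      enabled: true
      serviceMonitorSelector:    # matches labels on ServiceMonitor objects
        matchLabels:
          release: my-monitoring # placeholder label value
  # collector config (prometheus receiver pointed at the allocator) omitted for brevity
```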
The error in your log means that you don't have collector Pods on Nodes where Pods selected by your monitors are running. This may be related to the issue you're experiencing, but given that you're seeing no data whatsoever, this probably isn't the case. Are you sure the labels are correct? Can you also post the output of the /scrape_configs endpoint on the target allocator?
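(On the labels question, a general note rather than anything confirmed in this thread: serviceMonitorSelector matches labels on the ServiceMonitor objects themselves, not on the Pods or Services they scrape. A hypothetical ServiceMonitor that would match a `release: my-monitoring` selector:)

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                 # placeholder
  labels:
    release: my-monitoring     # the label a serviceMonitorSelector would match
spec:
  selector:
    matchLabels:
      app: my-app              # selects the Service to scrape
  endpoints:
    - port: metrics
```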
Sorry for the late reply. I found the solution, and it's a bit odd.
I edited the target allocator ConfigMap and replaced the string matchLabels with matchlabels (lower-case l). After restarting the target allocator, I queried the /jobs endpoint and saw only the jobs that matched the labels.
It seems the matchlabels key has to be all lower case with no spaces. I looked a bit at the code and couldn't find why, or whether that's intentional. Does this make sense?
It is intentional, though it looks like we'll soon accept both casings thanks to work in #3350. The examples you posted had it all lower case though, which should work.
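(To make the lower-case workaround above concrete — a rough sketch of the relevant portion of the target allocator ConfigMap, assuming the standalone allocator config layout; key names other than the lower-case matchlabels discussed in this thread are best-effort assumptions, and the label values are placeholders:)

```yaml
allocation_strategy: per-node
collector_selector:
  matchlabels:
    app.kubernetes.io/managed-by: opentelemetry-operator   # placeholder
prometheus_cr:
  enabled: true
  service_monitor_selector:
    matchlabels:                # lower-case key, as discussed in this thread
      release: my-monitoring    # placeholder label value
```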