Expose healthcheck server (#643) #644

Open · mwasilew2 wants to merge 1 commit into main

Conversation

mwasilew2

This exposes a basic healthcheck server. Internally, the healthcheck handler doesn't do anything complex. The idea is just to check whether the process is responding.

Closes #643

This exposes a basic healthcheck server. Internally, the healthcheck handler doesn't do anything complex.
The idea is just to check whether the process is responding.

Signed-off-by: Michal Wasilewski <[email protected]>
@mwasilew2
Author

Just trying to help here. Let me know if this needs tweaking.

@msakrejda
Contributor

Thanks for the contribution, but I don't think this health check would have caught the issue fixed by #637: I believe the health check would have kept responding "okay", but the collector still would have been stuck. Do you see it differently?

@mwasilew2
Author

@msakrejda

> I don't think this health check would have caught the issue fixed by #637: I believe the health check would have kept responding "okay", but the collector still would have been stuck. Do you see it differently?

Oh, I hadn't seen #637.

No, I agree, this healthcheck would most likely respond with "200 OK" even when hitting the bug fixed in #637.

On the other hand, I don't have the diagnostics necessary to know whether our collector was unresponsive due to #637, so it's hard to tell whether a simple healthcheck would have helped. My intent was not to come up with a check that covers every potential failure scenario. What I was trying to achieve with this PR was the simplest possible check of whether the process is able to respond at all, nothing more. If it can't respond even to a healthcheck, automation (the cloud provider) can try restarting it. If it can respond, the operator (me or my colleagues) can jump in and try to capture some diagnostics.

I'm also fine with closing this PR if you feel it doesn't bring any value :)
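
For concreteness, this is roughly the shape of what I have in mind; the address value is a placeholder and the exact wiring in this PR may differ slightly:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// ":8080" is a placeholder; in the PR this comes from an `address` variable.
	address := ":8080"

	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		// No internal state is inspected: a 200 only means the process is
		// alive and its HTTP listener can still serve requests.
		w.WriteHeader(http.StatusOK)
	})

	// In the collector this would run in a goroutine next to the main loop.
	if err := http.ListenAndServe(address, nil); err != nil {
		log.Fatalf("health check server stopped: %s", err)
	}
}
```

ECS or a Kubernetes liveness probe would then poll GET /health and restart the container if it stops answering.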

Code under review (diff excerpt; the surrounding lines are approximated for readability):

```go
server := &http.Server{
	Addr: address,
}
http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
})
```
Member

I wonder how hard it'd be to determine whether all configured servers have successful recent full snapshots - still not sure if that'd catch #637, but it would catch e.g. password errors or other problems with collecting the data and (some) errors when sending it.

We could potentially use server.PrevState.LastStatementStatsAt for this purpose? (https://github.com/pganalyze/collector/blob/main/state/state.go#L44)

If we do this, we should also make sure to not report failure at startup (where it can take up to 10 minutes to have the first full snapshot that sets server.PrevState)
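
As a very rough sketch of what I mean (the `[]*state.Server` parameter, the handler name, and the 15-minute threshold are guesses on my part; only `PrevState.LastStatementStatsAt` comes from state/state.go):

```go
import (
	"net/http"
	"time"

	"github.com/pganalyze/collector/state"
)

// Rough sketch only; not wired into the actual collector startup code.
func healthHandler(servers []*state.Server, startedAt time.Time) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Grace period: the first full snapshot (which sets PrevState) can take
		// up to ~10 minutes after startup, so don't report failure before then.
		if time.Since(startedAt) < 10*time.Minute {
			w.WriteHeader(http.StatusOK)
			return
		}
		for _, server := range servers {
			// 15 minutes is an arbitrary example threshold for "no recent full snapshot".
			if time.Since(server.PrevState.LastStatementStatsAt) > 15*time.Minute {
				w.WriteHeader(http.StatusServiceUnavailable)
				return
			}
		}
		w.WriteHeader(http.StatusOK)
	}
}
```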

Author

My intention was to configure this healthcheck in the ECS task definition or as a Kubernetes liveness probe. The idea was that if the process is unresponsive, ECS or Kubernetes would restart it.

In that context, I'm not sure an issue with one of the monitored DBs is a good enough reason to kick the entire collector. Maybe the statuses of the monitored DBs could be reported as metrics on the collector, along with the statuses of uploads? In other words, the collector could expose metrics for the status of its targets and the status of its uploads. If my understanding is correct, the latter would cover #637.
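
To sketch what I mean by metrics (the metric names are made up, and a real implementation would probably use a proper metrics library rather than expvar):

```go
import (
	"expvar"
	"time"
)

// Made-up metric names, purely illustrative.
var (
	lastSuccessfulUpload       = expvar.NewString("last_successful_snapshot_upload")
	monitoredServersWithErrors = expvar.NewInt("monitored_servers_with_errors")
)

// The collector's main loop would call something like this after every snapshot
// attempt; expvar then exposes the values as JSON on /debug/vars of the default mux.
func recordSnapshotResult(uploadOK bool, serversWithErrors int) {
	if uploadOK {
		lastSuccessfulUpload.Set(time.Now().Format(time.RFC3339))
	}
	monitoredServersWithErrors.Set(int64(serversWithErrors))
}
```

An external monitor could then alert on a stale last_successful_snapshot_upload, which is closer to catching something like #637 than a liveness probe is.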

Contributor

Yeah, perhaps I dismissed this idea prematurely. I think what Lukas proposed could work: we'd just need to make sure the health check has access to the server list. We could potentially expose configuration around that to have some flexibility over what constitutes a health check failure.
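
For example, something along these lines (a completely hypothetical setting, not an existing collector option):

```go
import "time"

// Hypothetical configuration, just to illustrate the flexibility we could expose.
type HealthCheckConfig struct {
	// Address the health check server listens on; empty disables it.
	ListenAddress string
	// How stale the newest full snapshot for any monitored server may get before
	// /health starts returning 503; zero would reduce this to a plain liveness probe.
	MaxSnapshotAge time.Duration
}
```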

@mwasilew2 for the collector metric approach, how were you thinking of reporting these metrics? That seems different from a health check endpoint, right?

Successfully merging this pull request may close these issues.

Expose a health check endpoint
3 participants