Create api-availability measurement #1096
/assign
Great, thanks @vamossagar12! Let's start with V0. Could you let me know which part of the description is unclear or requires further details?
Thanks, @mm4tt. A couple of things:
As an outside question: typically, if the health endpoint is down for a configured time, the response would be to scale up or something similar. I just wanted to understand the rationale behind adding this particular measurement. Thanks!
This seems like an implementation detail. We can make it configurable, but I doubt we will be using different values once we agree on something.
Correct
There are a couple of points:
Thanks for the updates, @wojtek-t. I will start on this issue now. I'm pretty sure there will be a few more questions along the way, though :)
Hi. Since I last commented, I haven't had a chance to look at it. I will start over the next couple of days.
Hi, I started looking at this today. One thing I noticed is that there is already a measurement, metrics_for_e2e, which, among other things, fetches metrics from the API server. The new measurement can work along the same lines, except that instead of hitting the /metrics endpoint it would hit the /healthz endpoint of the API server. That covers the implementation standpoint. The other question I had: the measurements we define live within a Step, and a group of Steps forms a single test. When we say that this new measurement measures the health of the API server for the duration of a test (mentioned above), what exactly does a test mean in this case: the Step that houses the measurement(s), or the overall test?
Hey @vamossagar12, it makes sense to make the implementation similar to metrics_for_e2e. It's exactly as you said: instead of hitting /metrics, we'll be hitting /healthz. Regarding your second question: usually a measurement has two actions, start and gather.
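For readers following along, here is a rough, self-contained Go sketch of the approach being discussed: a measurement object that begins polling the API server's /healthz endpoint on the start action and reports an availability summary on the gather action. This is not the actual clusterloader2 measurement interface; the type, field, and method names (apiAvailabilityMeasurement, Execute, pollInterval, healthzURL) are illustrative assumptions, and a real measurement would use an authenticated Kubernetes client rather than a bare http.Get.

```go
package apiavailability

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

// apiAvailabilityMeasurement is an illustrative sketch, not the real
// clusterloader2 type. It counts /healthz probes between "start" and "gather".
type apiAvailabilityMeasurement struct {
	mu           sync.Mutex
	total, ok    int
	stopCh       chan struct{}
	pollInterval time.Duration
	healthzURL   string // placeholder: API server address plus /healthz
}

// Execute dispatches on the action, mirroring the start/gather split described above.
func (m *apiAvailabilityMeasurement) Execute(action string) error {
	switch action {
	case "start":
		m.stopCh = make(chan struct{})
		go m.poll()
		return nil
	case "gather":
		if m.stopCh != nil {
			close(m.stopCh)
		}
		m.mu.Lock()
		defer m.mu.Unlock()
		if m.total == 0 {
			return fmt.Errorf("no /healthz probes recorded")
		}
		fmt.Printf("API availability: %.2f%% (%d/%d probes)\n",
			100*float64(m.ok)/float64(m.total), m.ok, m.total)
		return nil
	default:
		return fmt.Errorf("unknown action %q", action)
	}
}

// poll probes /healthz once per pollInterval until stopCh is closed.
// A real measurement would use an authenticated client instead of http.Get.
func (m *apiAvailabilityMeasurement) poll() {
	ticker := time.NewTicker(m.pollInterval)
	defer ticker.Stop()
	for {
		select {
		case <-m.stopCh:
			return
		case <-ticker.C:
			resp, err := http.Get(m.healthzURL)
			healthy := err == nil && resp.StatusCode == http.StatusOK
			if resp != nil {
				resp.Body.Close()
			}
			m.mu.Lock()
			m.total++
			if healthy {
				m.ok++
			}
			m.mu.Unlock()
		}
	}
}
```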
Not sure I understand. For metrics_for_e2e, IIRC we're fetching the metrics once at the end of the test. Assuming the above is true, it can't really work the same way...
Good point, I should have checked how metrics_for_e2e works :) Still, you should be able to take some inspiration from that measurement. Let me know if you have more questions.
Actually, I meant to use metrics_for_e2e only as a baseline for how to interact with the API server. I hadn't looked at the internals, so what @wojtek-t said is even more valuable :)
Regarding point 2, I still have a question. As far as I understand, the hierarchy of a test is: Test -> Step -> Phase(s) or Measurement(s). If we just focus on measurements, they sit two levels below a test. So when we hit the API server via start and gather from within a measurement, wouldn't that still be within the context of that particular step? Is the question clear now, or is my thinking totally off track here 😄
Oh, I see where the confusion comes from. The hierarchy you listed is correct, but the measurement there should be treated as a measurement invocation. The measurement object's life spans the whole test. So if your test invokes the measurement with the start action in an early step and with the gather action in a later step, the start and gather methods will be called on the same measurement instance.
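Continuing the hypothetical sketch above (same illustrative package, so it reuses those imports), the driver side of that lifecycle could look roughly like this: one instance is created for the test, receives the start action in an early step, and receives the gather action in the final step.

```go
// runTest is an illustrative stand-in for the test runner, not real
// clusterloader2 code. It shows one measurement object outliving the steps.
func runTest() error {
	m := &apiAvailabilityMeasurement{
		pollInterval: time.Second,
		healthzURL:   "https://127.0.0.1:6443/healthz", // placeholder address
	}

	// Early step: invoke the measurement with the "start" action.
	if err := m.Execute("start"); err != nil {
		return err
	}

	// Middle steps: the load-generating phases of the test would run here.
	time.Sleep(10 * time.Second) // stand-in for the rest of the test

	// Final step: invoke the same instance with the "gather" action.
	return m.Execute("gather")
}
```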
@mm4tt I have taken an initial stab at this here: https://github.com/kubernetes/perf-tests/pull/1162/files. Please review when you have the bandwidth. I will also put more thought into improving it.
@wojtek-t The PR got merged. I guess the next task would be to write the config files? Any specific areas for me to start looking at for that?
We're in the middle of V1. We have already added a configurable availability percentage threshold below which the test will fail. The problem is that the API availability measurement makes the API call latency measurement fail. This is because the former runs
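As a rough illustration of the configurable threshold mentioned above (the function and parameter names are assumptions, not the merged implementation), the V1 check could be as simple as:

```go
package apiavailability

import "fmt"

// checkThreshold is an illustrative sketch of the V1 idea above: fail the
// measurement if observed availability falls below a configured percentage.
func checkThreshold(okProbes, totalProbes int, thresholdPercent float64) error {
	if totalProbes == 0 {
		return fmt.Errorf("no /healthz probes recorded")
	}
	availability := 100 * float64(okProbes) / float64(totalProbes)
	if availability < thresholdPercent {
		return fmt.Errorf("API availability %.2f%% is below the configured threshold %.2f%%",
			availability, thresholdPercent)
	}
	return nil
}
```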
Justification
See this comment - #1086 (comment)
Milestones
V0
V1
V2
/good-first-issue