Make the SPC ready status reflect its capacity #1108

Closed

Conversation

@metlos (Contributor) commented Nov 29, 2024

Make the SPC ready status reflect the ability to place spaces on the corresponding cluster. The capacity manager is simplified to take this into account, though it still needs to re-check the space count from the cache to reduce the chance of placing spaces on full clusters just because the SPC hasn't been reconciled yet.

At one point during the implementation, I based the capacity computation entirely on the capacity info contained in the SPC. It then became apparent that this would not work, because it would increase the chance of over-committing spaces to the clusters due to the delay between an increase in the space count and its manifestation in the SPC's status.

Therefore, the capacity manager has been updated to take into account the "fresh" space count from the cache.

Another consequence of this "retrofit" is that many of the tests were simplified to no longer mock the ToolchainStatus and instead rely completely on the info in the SPCs. This doesn't fully correspond to the runtime state, but IMHO it is sufficient in the affected unit tests because they merely use this information; they're not testing it.
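To illustrate the re-check described above, here is a minimal sketch in Go; hasPlaceFor, spcStatus, and all field names are hypothetical stand-ins, not the actual host-operator API.

package capacity

// consumedCapacity mirrors the idea of the capacity info recorded in the
// SPC status at its last reconciliation.
type consumedCapacity struct {
	SpaceCount int
}

// spcStatus is a simplified stand-in for the SpaceProvisionerConfig status.
type spcStatus struct {
	Ready             bool             // the "ready" condition, which now reflects capacity
	ConsumedCapacity  consumedCapacity // capacity as of the last SPC reconciliation
	MaxNumberOfSpaces int              // 0 means "no limit"
}

// hasPlaceFor reports whether the cluster can accept another space. The SPC
// ready condition already reflects capacity, but it can lag behind reality,
// so the "fresh" space count from the counter cache is consulted as well to
// reduce the chance of over-committing an almost-full cluster.
func hasPlaceFor(status spcStatus, cachedSpaceCount int) bool {
	if !status.Ready {
		return false
	}
	// Prefer the cached count when it is ahead of the possibly stale
	// count in the SPC status.
	count := status.ConsumedCapacity.SpaceCount
	if cachedSpaceCount > count {
		count = cachedSpaceCount
	}
	return status.MaxNumberOfSpaces == 0 || count < status.MaxNumberOfSpaces
}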

Paired PRs:


openshift-ci bot commented Nov 29, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: metlos
Once this PR has been reviewed and has the lgtm label, please assign alexeykazakov for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@mfrancisc (Contributor) left a comment:

Nice job 👍
I like the new test simplification without the InitializeCounters requirement.

I have a few comments/questions, mostly related to the unit tests.

Review threads: pkg/capacity/manager.go (several, resolved), pkg/capacity/manager_test.go (resolved), controllers/spaceprovisionerconfig/mapper.go (resolved)
@metlos (Contributor, Author) commented Dec 4, 2024

/retest

}
spc.Status.ConsumedCapacity = cc

capacityCondition := r.determineCapacityReadyState(spc)
Contributor:

Should we have a message or different reasons in order to be able to spot which threshold was exceeded? I mean, the SpaceProvisionerConfigInsufficientCapacityReason alone doesn't say whether the memory usage is higher than the threshold or the space count exceeded the configured maximum. I think it could be useful to spot that.

Contributor:

If both thresholds are exceeded, we could have that information in the message.

Contributor Author:

I considered that, but since both the limits and the usage are in the same CR, it's a matter of just glancing at the data to see what's wrong. IMHO, if we wanted to be this detailed, we should add separate conditions for the different aspects of the readiness computation.

So far, a single condition has been enough for us, so I didn't want to overdo the error reporting when all the information is already in the CR.
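As a concrete (and purely illustrative) picture of the single-condition approach being defended here: the reason follows the SpaceProvisionerConfigInsufficientCapacityReason mentioned above, but the types and the constant's value are hypothetical stand-ins, not the real toolchain API.

package spaceprovisionerconfig

import corev1 "k8s.io/api/core/v1"

type capacityThresholds struct {
	MaxNumberOfSpaces           int // 0 means unlimited
	MaxMemoryUtilizationPercent int // 0 means unchecked
}

type consumedCapacity struct {
	SpaceCount               int
	MemoryUtilizationPercent int
}

// determineCapacityReadyState collapses all capacity checks into a single
// condition: one "insufficient capacity" reason regardless of which
// threshold tripped, because both the limits and the usage live in the same
// CR and can be inspected there directly.
func determineCapacityReadyState(limits capacityThresholds, used consumedCapacity) (corev1.ConditionStatus, string) {
	const insufficientCapacityReason = "InsufficientCapacity" // hypothetical value
	if limits.MaxNumberOfSpaces > 0 && used.SpaceCount >= limits.MaxNumberOfSpaces {
		return corev1.ConditionFalse, insufficientCapacityReason
	}
	if limits.MaxMemoryUtilizationPercent > 0 && used.MemoryUtilizationPercent >= limits.MaxMemoryUtilizationPercent {
		return corev1.ConditionFalse, insufficientCapacityReason
	}
	return corev1.ConditionTrue, ""
}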

Review thread: pkg/capacity/manager.go (resolved)

codecov bot commented Dec 5, 2024

Codecov Report

Attention: Patch coverage is 86.72199% with 32 lines in your changes missing coverage. Please review.

Project coverage is 79.50%. Comparing base (e809054) to head (136c8ed).

Files with missing lines                                 Patch %   Lines
...isionerconfig/spaceprovisionerconfig_controller.go    83.07%    20 Missing and 2 partials ⚠️
pkg/capacity/manager.go                                  90.27%    5 Missing and 2 partials ⚠️
main.go                                                  0.00%     3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1108      +/-   ##
==========================================
+ Coverage   79.44%   79.50%   +0.05%     
==========================================
  Files          78       78              
  Lines        7785     7924     +139     
==========================================
+ Hits         6185     6300     +115     
- Misses       1422     1442      +20     
- Partials      178      182       +4     
Files with missing lines                                 Coverage Δ
...ers/spacecompletion/space_completion_controller.go    85.48% <100.00%> (-0.24%) ⬇️
controllers/spaceprovisionerconfig/mapper.go             100.00% <100.00%> (ø)
controllers/usersignup/approval.go                       100.00% <100.00%> (ø)
test/spaceprovisionerconfig/util.go                      100.00% <100.00%> (ø)
main.go                                                  0.00% <0.00%> (ø)
pkg/capacity/manager.go                                  94.83% <90.27%> (-3.65%) ⬇️
...isionerconfig/spaceprovisionerconfig_controller.go    80.95% <83.07%> (-0.13%) ⬇️

Review threads: controllers/spaceprovisionerconfig/mapper_test.go (two, resolved), controllers/usersignup/approval_test.go (resolved)
Comment on lines 84 to 92
func DefaultClusterManager(namespace string, cl runtimeclient.Client) *ClusterManager {
	return NewClusterManager(namespace, cl, nil)
}

func NewClusterManager(namespace string, cl runtimeclient.Client, getSpaceCount SpaceCountGetter) *ClusterManager {
	return &ClusterManager{
		namespace:     namespace,
		client:        cl,
		getSpaceCount: getSpaceCount,
	}
}
Contributor:

I think we discussed that it would make more sense to have the capacity manager changes in a separate PR; that would help us promote the changes in an isolated way.

Contributor Author:

Ah, true. It's been so long that I completely forgot about this.
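For context, a hypothetical wiring of the getSpaceCount parameter from the snippet above; the SpaceCountGetter signature and the backing map are illustrative stand-ins for the real counter cache, not the actual API.

// SpaceCountGetter resolves the fresh number of spaces on a member cluster.
type SpaceCountGetter func(clusterName string) (int, bool)

// spaceCountGetterFrom adapts a plain map of per-cluster space counts
// (standing in for the counter cache) into a SpaceCountGetter.
func spaceCountGetterFrom(counts map[string]int) SpaceCountGetter {
	return func(clusterName string) (int, bool) {
		n, ok := counts[clusterName]
		return n, ok
	}
}

With this, construction could look like NewClusterManager(namespace, cl, spaceCountGetterFrom(counts)), while DefaultClusterManager passes nil for callers that don't need the re-check.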

Review threads: pkg/capacity/manager.go (two, resolved)

openshift-ci bot commented Dec 6, 2024

@metlos: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name     Commit    Required   Rerun command
ci/prow/e2e   845ea69   true       /test e2e

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@metlos (Contributor, Author) commented Dec 6, 2024

I forgot that this needs to be done in stages to avoid the risk of making the clusters unavailable for space placement for a short period of time.

I'm closing this PR in favor of #1109, which puts the consumed capacity info into SPCs and will be followed up by another PR containing the changes in the capacity manager.
