[Umbrella] RayClusterStatusConditions Tests Review #2645
Comments
Do we have any e2e tests for this feature? I'm guessing we never added any since our e2e tests don't make it easy to enable feature gates. But this should be simpler now that the feature gate is always enabled.

We don't have explicit e2e tests for this feature yet. We do have implicit ones, which are the e2e tests for RayService, since we check the new status condition in kuberay/ray-operator/controllers/ray/rayservice_controller.go (lines 1123 to 1128 in 4021766). To test them explicitly, how about we add some assertions to these condition values at

I think we could have some tests on the
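Not the repo's actual helper, but a minimal sketch of what such an explicit assertion could look like in a Gomega-based suite with a controller-runtime client. The helper name, namespace handling, and timeouts are placeholders, and the condition type is written as a plain string rather than the typed constant from the ray v1 API package:

```go
package ray_test

// Sketch only. It assumes the suite already provides a context and a
// controller-runtime client, as the existing e2e/envtest suites do.
import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"k8s.io/apimachinery/pkg/api/meta"
	"sigs.k8s.io/controller-runtime/pkg/client"

	rayv1 "github.com/ray-project/kuberay/ray-operator/apis/ray/v1"
)

// expectConditionTrue polls the RayCluster until the named status condition is True.
func expectConditionTrue(ctx context.Context, c client.Client, namespace, name, conditionType string) {
	Eventually(func(g Gomega) {
		cluster := &rayv1.RayCluster{}
		g.Expect(c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, cluster)).To(Succeed())
		// Status.Conditions is a []metav1.Condition, so the apimachinery meta helpers apply.
		g.Expect(meta.IsStatusConditionTrue(cluster.Status.Conditions, conditionType)).To(BeTrue())
	}, 2*time.Minute, time.Second).Should(Succeed())
}
```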
Continuing from #2562: We have 5 status conditions on the RayCluster CR which are gated by the RayClusterStatusConditions feature gate in v1.2.0. Now we want to enable them by default in v1.3.0. This issue is an umbrella for reviewing the tests for these status conditions.

HeadPodReady and RayClusterProvisioned
In the envtest (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, line 1069 in 3a62419):

- We make sure that the HeadPodReady condition is set when the Head Pod is ready. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 1111 to 1130 in 3a62419)
- We make sure that the RayClusterProvisioned condition is NOT set when the Worker Pods are not ready, but is set after they are ready. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 1132 to 1162 in 3a62419)
- We make sure that the RayClusterProvisioned condition is STILL set after the workers are down. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 1164 to 1183 in 3a62419)
- We make sure that the RayClusterProvisioned condition is STILL set, but the HeadPodReady condition is NOT set, after the Head Pod is dead. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 1185 to 1214 in 3a62419)
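As a rough illustration of the shape of these envtest cases (not the code at the lines above): "condition is set" checks poll with Eventually, while "condition is NOT set" checks need Consistently, since a single Get could simply race the controller. The sketch below reuses the imports from the earlier sketch; k8sClient, ctx, the namespace, cluster name, and timeouts are assumed placeholders:

```go
// "Condition is set" checks: poll until the controller has written it.
Eventually(func(g Gomega) {
	cluster := &rayv1.RayCluster{}
	g.Expect(k8sClient.Get(ctx, client.ObjectKey{Namespace: "default", Name: "raycluster-test"}, cluster)).To(Succeed())
	g.Expect(meta.IsStatusConditionTrue(cluster.Status.Conditions, "HeadPodReady")).To(BeTrue())
}, time.Second*10, time.Millisecond*500).Should(Succeed())

// "Condition is NOT set" checks: make sure it stays absent for a while.
Consistently(func(g Gomega) {
	cluster := &rayv1.RayCluster{}
	g.Expect(k8sClient.Get(ctx, client.ObjectKey{Namespace: "default", Name: "raycluster-test"}, cluster)).To(Succeed())
	g.Expect(meta.FindStatusCondition(cluster.Status.Conditions, "RayClusterProvisioned")).To(BeNil())
}, time.Second*3, time.Millisecond*500).Should(Succeed())
```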
ReplicaFailure

In the envtest, we make sure the ReplicaFailure condition is set when Pods can't be created. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 1217 to 1238 in 3a62419)

In the unit test, we make sure the calculateStatus function will set the ReplicaFailure condition from the reconcileErr. (kuberay/ray-operator/controllers/ray/raycluster_controller_unit_test.go, lines 1804 to 1807 in 3a62419)
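The behavior that unit test pins down is essentially "a reconcile error should surface as a ReplicaFailure condition". Below is a standalone sketch of that logic using the apimachinery condition helpers; it is not KubeRay's actual calculateStatus, and the function name, reason string, and clear-on-success behavior are illustrative assumptions:

```go
package status

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// reflectReconcileError surfaces the last reconcile error as a
// ReplicaFailure-style condition and clears it once reconciliation succeeds.
func reflectReconcileError(conditions *[]metav1.Condition, reconcileErr error) {
	if reconcileErr != nil {
		meta.SetStatusCondition(conditions, metav1.Condition{
			Type:    "ReplicaFailure",
			Status:  metav1.ConditionTrue,
			Reason:  "FailedCreate", // illustrative reason only
			Message: reconcileErr.Error(),
		})
		return
	}
	meta.RemoveStatusCondition(conditions, "ReplicaFailure")
}
```

A unit test of this shape would call the function with a fake error and then assert on meta.FindStatusCondition, which is presumably close to what the linked unit test does against calculateStatus.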
RayClusterSuspending

In the envtest (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, line 714 in 3a62419):

- The RayClusterSuspending condition is set when .Spec.Suspend is true. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 746 to 753 in 3a62419)
- The RayClusterSuspending condition is STILL set when .Spec.Suspend turns to false before finishing suspension. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 755 to 761 in 3a62419)
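The suspend cases boil down to flipping .Spec.Suspend and then watching the condition. A fragment-level sketch (same imports and suite variables as the first sketch; cluster is an already-created RayCluster, and Spec.Suspend is assumed to be a *bool):

```go
// Ask the controller to suspend the cluster.
suspend := true
cluster.Spec.Suspend = &suspend
Expect(k8sClient.Update(ctx, cluster)).To(Succeed())

// While the Pods are still being torn down, RayClusterSuspending should be True.
Eventually(func(g Gomega) {
	fetched := &rayv1.RayCluster{}
	g.Expect(k8sClient.Get(ctx, client.ObjectKeyFromObject(cluster), fetched)).To(Succeed())
	g.Expect(meta.IsStatusConditionTrue(fetched.Status.Conditions, "RayClusterSuspending")).To(BeTrue())
}, time.Second*10, time.Millisecond*500).Should(Succeed())
```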
RayClusterSuspended

In the envtest:

- The RayClusterSuspended condition is set and RayClusterProvisioned is unset when all Pods are gone. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 600 to 612 in 3a62419)
- The RayClusterSuspended condition is unset after resuming the RayCluster. (kuberay/ray-operator/controllers/ray/raycluster_controller_test.go, lines 686 to 698 in 3a62419)
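The suspended-state checks can be read as: once every Pod is gone, RayClusterSuspended is reported and RayClusterProvisioned no longer is. A short sketch of that assertion (same assumptions as the fragments above; since "unset" could mean either removed or set to False, the check only asserts the condition is not True):

```go
// After all Pods are deleted, the cluster should report Suspended and
// should no longer report Provisioned as True.
Eventually(func(g Gomega) {
	fetched := &rayv1.RayCluster{}
	g.Expect(k8sClient.Get(ctx, client.ObjectKeyFromObject(cluster), fetched)).To(Succeed())
	g.Expect(meta.IsStatusConditionTrue(fetched.Status.Conditions, "RayClusterSuspended")).To(BeTrue())
	g.Expect(meta.IsStatusConditionTrue(fetched.Status.Conditions, "RayClusterProvisioned")).To(BeFalse())
}, time.Second*10, time.Millisecond*500).Should(Succeed())
```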