
OCPBUGS-48250: MCO CO degrades are stuck on until master pool updates complete #4791

Open · wants to merge 1 commit into base: master
Conversation

Contributor

@djoshy djoshy commented Jan 14, 2025

- What I did
Added a clear CO degrade function, which is called after a successful invocation of an operator sync function.

- How to verify it
On a build without this fix:

  1. Degrade the operator. I did this by scaling down the CVO and editing the releaseVersion field in the machine-config-operator-images configmap to a bad value. This will cause syncRenderConfig to fail and degrade the operator (visible in the CO object and operator logs).
  2. Now, deploy an MC update to the master pool. This will cause the operator to be stuck in the syncRequiredMachineConfigPools sync function, where it'll wait until the master pool completes the update.
  3. While the master pool is still updating, restore releaseVersion to its original value. You should see the operator log clear up shortly, but the CO will continue to be degraded. Only once the master pool is done updating will the CO degrade clear up.

On a build with this fix:
Repeat steps 1 to 3 above. This time, you should notice that the CO degrade clears up shortly after restoring releaseVersion, without having to wait for the master pool to complete the update.

Note: The update needs to be applied to a master pool because the syncRequiredMachineConfigPools function will only "trap" the operator for master pool updates.

@openshift-ci-robot added labels on Jan 14, 2025: jira/severity-moderate (Referenced Jira bug's severity is moderate for the branch this PR is targeting), jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type), jira/valid-bug (Indicates that a referenced Jira bug is valid for the branch this PR is targeting).
@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-48250, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.0) matches configured target version for branch (4.19.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr

The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

Contributor

openshift-ci bot commented Jan 14, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: djoshy

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 14, 2025
@djoshy
Contributor Author

djoshy commented Jan 15, 2025

/retest-required

@djoshy
Contributor Author

djoshy commented Jan 15, 2025

/test security

Contributor

openshift-ci bot commented Jan 15, 2025

@djoshy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Required | Rerun command
ci/prow/e2e-vsphere-ovn-upi | 07e371a | false | /test e2e-vsphere-ovn-upi
ci/prow/e2e-gcp-op-single-node | 07e371a | true | /test e2e-gcp-op-single-node

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@sergiordlr

Verified using IPI on AWS

  1. Scale down CVO
    $ oc scale deployment cluster-version-operator --replicas 0 -n openshift-cluster-version

  2. Remove the configuration of the machine-config-operator-images configmap to force a degraded status in the machine-config clusteroperator

# Don't forget to get the original configuration; we will restore it later
$ oc get cm machine-config-operator-images -oyaml
$ oc set data cm machine-config-operator-images 'images.json='

  3. Wait until the machine-config CO is degraded
$ oc get co machine-config
NAME             VERSION                                                AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
machine-config   4.19.0-0.test-2025-01-20-085829-ci-ln-x2x5hkk-latest   True        False         True       120m    Failed to resync 4.19.0-0.test-2025-01-20-085829-ci-ln-x2x5hkk-latest because: could not parse images.json bytes: unexpected end of JSON input
  4. Apply an MC to the master pool
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: test-mc-master
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,dGVzdA==
        filesystem: root
        mode: 420
        path: /etc/test-file-0.test
  5. Wait for the master pool to start updating
$ oc get mcp,nodes; oc get co machine-config
NAME                                                         CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
machineconfigpool.machineconfiguration.openshift.io/master   rendered-master-55d74396e13f62dc07bc3b38e3ed2aa3   False     True       False      3              0                   0                     0                      128m
machineconfigpool.machineconfiguration.openshift.io/worker   rendered-worker-639975f0322cd83401ab1d9a80e6ae46   True      False      False      3              3                   3                     0                      128m

NAME                                             STATUS                     ROLES                  AGE    VERSION
node/ip-10-0-11-207.us-east-2.compute.internal   Ready                      worker                 125m   v1.31.3
node/ip-10-0-21-105.us-east-2.compute.internal   Ready,SchedulingDisabled   control-plane,master   130m   v1.31.3
node/ip-10-0-34-100.us-east-2.compute.internal   Ready                      control-plane,master   130m   v1.31.3
node/ip-10-0-51-58.us-east-2.compute.internal    Ready                      worker                 125m   v1.31.3
node/ip-10-0-64-184.us-east-2.compute.internal   Ready                      worker                 125m   v1.31.3
node/ip-10-0-84-199.us-east-2.compute.internal   Ready                      control-plane,master   130m   v1.31.3
  6. Fix the machine-config-operator-images configmap using the original configuration that we got in step 2
  7. After a few seconds, the machine-config CO should stop being degraded
$ oc get co machine-config
NAME             VERSION                                                AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
machine-config   4.19.0-0.test-2025-01-20-085829-ci-ln-x2x5hkk-latest   True        False         False      127m   

/label qe-approved

@openshift-ci openshift-ci bot added the qe-approved Signifies that QE has signed off on this PR label Jan 20, 2025
@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-48250, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.0) matches configured target version for branch (4.19.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr

