kctrl dev reconciler does not report an Error back #1482
Labels
bug
This issue describes a defect or unexpected behavior
carvel-accepted
This issue should be considered for future work; the triage process has been completed
What steps did you take:
We're using this code as a library in our project, [educates](https://github.com/vmware-tanzu-labs/educates-training-platform), to deploy the platform as an Application by reusing the capabilities of the dev command, since we only need to deploy our bits in the cluster and don't need further reconciliation.
What happened:
When using this code as a library, the reconcile.Result returned by appReconciler.Reconcile can effectively be ignored, as it carries no useful information about the reconciliation status or about errors that occurred in any of the internal phases (fetch, template, deploy), as illustrated in the sketch below.
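To make the gap concrete, here is a minimal sketch of the call site, assuming the reconciler is consumed through controller-runtime's reconcile.Reconciler interface. The stub type and wiring below are ours for illustration and are not kapp-controller's actual API:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// devAppReconciler is a hypothetical stand-in for kapp-controller's app
// reconciler. Like the real one, it satisfies controller-runtime's
// reconcile.Reconciler interface, whose contract is the issue here.
type devAppReconciler struct{}

func (r *devAppReconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
	// The real reconciler runs fetch -> template -> deploy internally,
	// logging failures instead of surfacing them in the return values.
	return reconcile.Result{}, nil
}

func main() {
	var appReconciler reconcile.Reconciler = &devAppReconciler{}

	res, err := appReconciler.Reconcile(context.Background(), reconcile.Request{
		NamespacedName: types.NamespacedName{Namespace: "default", Name: "educates-app"},
	})
	// err is nil and res is empty even when a phase failed; the only
	// trace of the failure is in the logs printed during reconciliation.
	fmt.Printf("err=%v requeue=%v requeueAfter=%v\n", err, res.Requeue, res.RequeueAfter)
}
```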
What did you expect:
We expected appReconciler.Reconcile to return a meaningful reconcile.Result with details of the reconciliation status, so that we could tell whether it succeeded or failed and, if it failed, in which phase (fetch, template, deploy) it failed and, ideally, why. Currently this is only possible by inspecting the logs printed to the output.
Since the real appReconciler implementation is meant to run in the cluster, this information is recorded in the App status that lives in the cluster; but in dev mode, where no App exists in the cluster, it would be ideal to have this information reported back to the consumer/client somehow (see the sketch below).
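For illustration, a hypothetical shape for such a result might look like the following. None of these types exist in kapp-controller today; the names are purely our suggestion for what a dev-mode wrapper could hand back in-process, mirroring what the in-cluster App status records:

```go
package devreconcile

import "fmt"

// Phase identifies which internal step the reconciliation reached.
type Phase string

const (
	PhaseFetch    Phase = "fetch"
	PhaseTemplate Phase = "template"
	PhaseDeploy   Phase = "deploy"
)

// Status is what we would want Reconcile (or a dev-mode wrapper around
// it) to return alongside the usual reconcile.Result.
type Status struct {
	Succeeded   bool
	FailedPhase Phase // set only when Succeeded is false
	Err         error // underlying error from the failed phase
}

func (s Status) String() string {
	if s.Succeeded {
		return "reconcile succeeded"
	}
	return fmt.Sprintf("reconcile failed during %s: %v", s.FailedPhase, s.Err)
}
```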
Anything else you would like to add:
There was some conversation about this on the Kubernetes Slack.
Environment:
- kapp-controller version (execute `kubectl get deployment -n kapp-controller kapp-controller -o yaml` and the annotation is `kbld.k14s.io/images`): v0.50.0
- Kubernetes version (use `kubectl version`): Irrelevant
Vote on this request
This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" at the top right of this comment to vote.
👍 "I would like to see this addressed as soon as possible"
👎 "There are other more important things to focus on right now"
We are also happy to receive and review Pull Requests if you want to help work on this issue.