Investigate the impact of the auto-scale-down jobs #157
Summary:

- Backup containers
- Databases
- S2I Builds
- Email verification services specifically
- BC Registries FDW Database
- Others

Details:

Monitored Applications Affected:

Others Affected:
I've spun the application pods back up and reviewed the environments for any other containers that were spun down. Next step is to review and identify what can be done to update the affected application pods.
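For reference, a minimal sketch of how the pods could be brought back up, assuming the affected workloads are DeploymentConfigs or Deployments (`<namespace>` and `<name>` are placeholders):

```bash
# Scale an affected DeploymentConfig back up to a single replica
oc scale dc/<name> -n <namespace> --replicas=1

# Equivalent for a Deployment-based workload
oc scale deployment/<name> -n <namespace> --replicas=1

# Confirm the pods are running again
oc get pods -n <namespace>
```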
Summary here: #157 (comment)
Closing this. The investigation is complete. Addressing the issues is covered by #158 |
Platform services has started running jobs that scale down any pods that have not been updated (rolled out) in over a year. These scripts will be run every Tuesday from now on.
The idea is to eliminate abandoned projects and free the associated resources, as well as to encourage best practices around pod/application maintenance.
The best practice set forth is to rebuild and redeploy application pods at least once a month in order to pick up updates and patches applied to the base image(s). This will have knock-on effects on some of our projects, such as those dependent on aca-py images.
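For S2I-based images, a rebuild/redeploy cycle along these lines would pick up the latest base image patches. This is a sketch only; the BuildConfig/DeploymentConfig names and namespaces are placeholders:

```bash
# Rebuild the image with S2I so it is built on top of the current base image
oc start-build <buildconfig-name> -n <tools-namespace> --follow

# Roll out the freshly built image to the target environment
# (only needed if the deployment is not triggered automatically by the new image tag)
oc rollout latest dc/<deployment-name> -n <target-namespace>
```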
As a workaround, application pods can be rolled out; this updates the resource manifests to include the current date.
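As a rough illustration of that workaround (names are placeholders), a restart-style rollout stamps the pod template with the current timestamp, which is presumably what the scale-down job keys on:

```bash
# Deployment: a restart-style rollout adds/updates the
# kubectl.kubernetes.io/restartedAt annotation with the current timestamp
oc rollout restart deployment/<name> -n <namespace>

# DeploymentConfig: trigger a new rollout instead
oc rollout latest dc/<name> -n <namespace>
```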
For now we want to review the pods that did get scaled down and identify what's needed to update them. We also want to identify what other pods may have been scaled down, since there are some services in the `tools` and deployment environments that we don't actively monitor. A separate ticket will be opened to discuss and design the update strategy moving forward.
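To help with that review, something along these lines could list workloads that are currently scaled to zero in a given namespace (a sketch only; adjust for the resource types actually in use):

```bash
# Print the name and replica count of each Deployment/DeploymentConfig
# in a namespace, keeping only the ones currently scaled to zero
oc get deploy,dc -n <namespace> \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas --no-headers \
  | awk '$2 == "0"'
```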