See README.md for details of how to use this Run Book / System Operation Manual template.
Copyright © 2014-2016 Skelton Thatcher Consulting
Licensed under CC BY-SA 4.0
Service or system name:
What business need is met by this service or system? What expectations do we have about availability and performance?
(e.g. Provides reliable automated reconciliation of logistics transactions from the previous 24 hours)
What kind of system is this? Web-connected order processing? Back-end batch system? Internal HTTP-based API? ETL control system?
(e.g. Internal API for order reconciliation based on Ruby and RabbitMQ, deployed in Docker containers on Kubernetes)
What explicit or implicit expectations are there from users or clients about the availability of the service or system?
(e.g. Contractual 99.9% service availability outside of the 03:00-05:00 maintenance window)
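As a worked example (not part of the template), a 99.9% availability target translates into a concrete downtime budget; the helper below is an illustrative sketch:

```python
def allowed_downtime_minutes(availability, period_hours):
    """Downtime budget (in minutes) implied by an availability target
    over a period of `period_hours` hours."""
    return (1 - availability) * period_hours * 60

# 99.9% over a 30-day month allows roughly 43.2 minutes of downtime
monthly_budget = allowed_downtime_minutes(0.999, 30 * 24)
```

Knowing the budget up front makes it easier to decide whether a given maintenance window or incident response time is compatible with the contractual target.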
Which team owns and runs this service or system?
(e.g. The Sneaky Sharks team (Bangalore) develops and runs this service: [email protected] / #sneaky-sharks on Slack / Extension 9265)
Which distinct software applications, daemons, services, etc. make up the service or system? What external dependencies does it have?
(e.g. Ruby app + RabbitMQ for source messages + PostgreSQL for reconciled transactions)
During what hours does the service or system actually need to operate? Can portions or features of the system be unavailable at times if needed?
(e.g. 03:00-01:00 GMT+0)
(e.g. 07:00-23:00 GMT+0)
How and where does data flow through the system? What controls or triggers data flows?
(e.g. mobile requests / scheduled batch jobs / inbound IoT sensor data)
What servers, containers, schedulers, devices, vLANs, firewalls, etc. are needed?
(e.g. 10+ Ubuntu 14 VMs on AWS IaaS + 2 AWS Regions + 2 VPCs per Region + Route53)
How is the system resilient to failure? What mechanisms for tolerating faults are implemented? How is the system/service made highly available?
(e.g. 2 Active-Active data centres across two cities + two or more nodes at each layer)
How can the system be throttled or partially shut down, e.g. to avoid flooding dependent systems? Can throughput be limited to, say, 100 requests per second? What kind of connection back-off schemes are in place?
(e.g. Commercial API gateway allows throttling control)
(e.g. Exponential backoff on all HTTP-based services + /health healthcheck endpoints on all services)
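An exponential back-off scheme like the one mentioned above can be sketched as follows; this is an illustrative Python sketch, and the retry counts and delays are arbitrary assumptions rather than values from the template:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=0.5, max_delay=30.0):
    """Retry `call` with exponential backoff plus jitter.

    The delay doubles on each failure (0.5s, 1s, 2s, ...) up to
    `max_delay`, with random jitter applied to avoid synchronised
    thundering-herd retries against a recovering dependency.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # budget exhausted: surface the failure
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay * random.uniform(0.5, 1.5))
```

The jitter factor is the important design choice: without it, all clients that failed together retry together, which can keep a struggling downstream system overloaded.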
Details of the expected throughput/traffic: call volumes, peak periods, quiet periods. What factors drive the load: bookings, page views, number of items in Basket, etc.
(e.g. Max: 1000 requests per second with 400 concurrent users - Friday @ 16:00 to Sunday @ 18:00, driven by likelihood of barbecue activity in the neighborhood)
_
_
_
What are the main differences between Production/Live and other environments? What kinds of things might therefore not be tested in upstream environments?
(e.g. Self-signed HTTPS certificates in Pre-Production - certificate expiry may not be detected properly in Production)
What tools are available to help operate the system?
(e.g. Use the queue-cleardown.sh script to safely clear down the processing queue nightly)
What compute, storage, database, metrics, logging, and scaling resources are needed? What are the minimum and expected maximum sizes (in CPU cores, RAM, GB disk space, GBit/sec, etc.)?
(e.g. Min: 4 VMs with 2 vCPU each. Max: around 40 VMs)
(e.g. Min: 10GB Azure blob storage. Max: around 500GB Azure blob storage)
(e.g. Min: 500GB Standard Tier RDS. Max: around 2TB Standard Tier RDS)
(e.g. Min: 100 metrics per node per minute. Max: around 6000 metrics per node per minute)
(e.g. Min: 60 log lines per node per minute (100KB). Max: around 6000 log lines per node per minute (1MB))
(e.g. Min: 10 encryption requests per node per minute. Max: around 100 encryption requests per node per minute)
What kind of security is in place for passwords and Personally Identifiable Information (PII)? Are the passwords hashed with a strong hash function and salted?
(e.g. Passwords are hashed with a 10-character salt and SHA-256)
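The example above describes salted SHA-256; the sketch below uses the Python standard library's PBKDF2-HMAC-SHA256 instead, a key-stretched variant of the same hash (the iteration count and salt length are assumptions, not template values):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted password hash with PBKDF2-HMAC-SHA256 (stdlib).

    Returns (salt, digest) so both can be stored; a fresh random salt
    is generated when none is supplied.
    """
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Constant-time comparison against the stored digest."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

Using `hmac.compare_digest` rather than `==` avoids leaking information through comparison timing.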
How will the system be monitored for security issues?
(e.g. The ABC tool scans for reported CVE issues and reports via the ABC dashboard)
How is configuration managed for the system?
(e.g. CloudInit bootstraps the installation of Puppet - Puppet then drives all system and application level configuration except for the XYZ service, which is configured via App.config files in Subversion)
How are configuration secrets managed?
(e.g. Secrets are managed with Hashicorp Vault with 3 shards for the master key)
Which parts of the system need to be backed up?
(e.g. Only the CoreTransactions database in PostgreSQL and the Puppet master database need to be backed up)
How does backup happen? Is service affected? Should the system be [partially] shut down first?
(e.g. Backup happens from the read replica - live service is not affected)
How does restore happen? Is service affected? Should the system be [partially] shut down first?
(e.g. The Booking service must be switched off before Restore happens otherwise transactions will be lost)
What log aggregation & search solution will be used?
(e.g. The system will use the existing in-house ELK cluster. 2000-6000 messages per minute expected at normal load levels)
What kind of log message format will be used? Structured logging with JSON? log4j-style single-line output?
(e.g. Log messages will use log4j compatible single-line format with wrapped stack traces)
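A log4j-compatible single-line format like the one in the example can be configured in a few lines; this Python sketch uses the stdlib `logging` module, and the pattern and logger name are illustrative assumptions:

```python
import logging

# Single-line, log4j-style pattern, e.g.
# "2016-01-01 12:00:00,000 INFO  [orders] Reconciliation started"
formatter = logging.Formatter(
    fmt="%(asctime)s %(levelname)-5s [%(name)s] %(message)s"
)

handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Reconciliation started")
```

Keeping one event per line (with stack traces wrapped rather than multi-line) makes the output much easier for log shippers and grep-style tooling to handle.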
What significant events, state transitions and error events may be logged?
(e.g. IDs 1000-1999: Database events; IDs 2000-2999: message bus events; IDs 3000-3999: user-initiated action events; ...)
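An event-ID scheme like the one in the example can be captured in a small lookup so that dashboards and alerts classify events consistently; the category names below are hypothetical, matching only the ranges given above:

```python
# Hypothetical mapping of the event-ID ranges from the example
EVENT_RANGES = {
    "database": range(1000, 2000),
    "message_bus": range(2000, 3000),
    "user_action": range(3000, 4000),
}

def event_category(event_id):
    """Map a numeric event ID to its category, or 'unknown'."""
    for category, ids in EVENT_RANGES.items():
        if event_id in ids:
            return category
    return "unknown"
```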
What significant metrics will be generated?
(e.g. Usual VM stats (CPU, disk, threads, etc.) + around 200 application technical metrics + around 400 user-level metrics)
How is the health of dependencies (components and systems) assessed? How does the system report its own health?
(e.g. Use the /health HTTP endpoint for internal components that expose it. Other systems and external endpoints: typically HTTP 200, plus some synthetic checks for some services)
(e.g. Provide a /health HTTP endpoint: 200 --> basic health, 500 --> bad configuration, plus /health/deps for checking dependencies)
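The 200/500 health-reporting convention above can be sketched as a small aggregation function, independent of any particular web framework; the dependency names here are illustrative assumptions:

```python
import json

def health(checks):
    """Aggregate named dependency checks into a /health-style response.

    `checks` maps a dependency name to a zero-argument callable that
    returns True (healthy) or False. Returns (http_status, json_body):
    200 when every check passes, 500 when any check fails or raises.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False  # a crashing check counts as unhealthy
    status = 200 if all(results.values()) else 500
    body = json.dumps({"status": "ok" if status == 200 else "fail",
                       "deps": results})
    return status, body
```

Treating an exception in a check as a failure (rather than letting it propagate) keeps the endpoint itself reliable even when a dependency probe misbehaves.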
How is the software deployed? How does roll-back happen?
(e.g. We use GoCD to coordinate deployments, triggering a Chef run pulling RPMs from the internal yum repo)
What kind of batch processing takes place?
(e.g. Files are pushed via SFTP to the media server. The system processes up to 100 of these per hour on a cron schedule)
What needs to happen when machines are power-cycled?
(e.g. *** WARNING: we have not investigated this scenario yet! ***)
What kind of checks need to happen on a regular basis?
(e.g. All /health endpoints should be checked every 60 secs, plus the synthetic transaction checks run every 5 mins via Pingdom)
How should troubleshooting happen? What tools are available?
(e.g. Use a combination of the /health endpoint checks and the abc-*.sh scripts for diagnosing typical problems)
How should patches be deployed and tested?
(e.g. Use the standard OS patch test cycle together with deployment via Jenkins and Capistrano)
(e.g. Use the early-warning notifications from UpGuard plus deployment via Jenkins and Capistrano)
Is the software affected by daylight-saving time changes (both client and server)?
(e.g. Server clocks all set to UTC+0. All date/time data converted to UTC with offset before processing)
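The convert-to-UTC-with-offset approach in the example can be shown in a few lines of stdlib Python; the timestamp and offset below are illustrative:

```python
from datetime import datetime, timezone, timedelta

# A local timestamp recorded with its UTC offset (+01:00, e.g. BST)
local = datetime(2016, 6, 1, 14, 30, tzinfo=timezone(timedelta(hours=1)))

# Convert to UTC before processing, so DST transitions on either the
# client or the server cannot skew ordering or duration calculations
utc = local.astimezone(timezone.utc)
```

Storing and comparing only the UTC values (keeping the original offset as metadata if needed) sidesteps the ambiguous or missing local times that occur around DST changes.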
Which data needs to be cleared down? How often? Which tools or scripts control cleardown?
(e.g. Use abc-cleardown.ps1, run nightly, to clear down the document cache)
Is log rotation needed? How is it controlled?
(e.g. The Windows Event Log ABC Service is set to a maximum size of 512MB)
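Size-capped rotation like the Windows Event Log example can also be done in application code; this Python sketch uses the stdlib `RotatingFileHandler`, and the 512KB cap and backup count are illustrative assumptions:

```python
import logging
import logging.handlers
import os
import tempfile

# Rotate the log at ~512KB, keeping 3 old files alongside the live one
log_path = os.path.join(tempfile.mkdtemp(), "service.log")
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=512 * 1024, backupCount=3
)
logger = logging.getLogger("rotated-example")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("service started")
```

Without a size or time cap, log files grow until they fill the disk, which tends to take down the service far more abruptly than the fault being logged.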
What needs to happen when parts of the system are failed over to standby systems? What needs to happen during recovery?
_
_
What tools or scripts are available to troubleshoot failover and recovery operations?
(e.g. Start by running SELECT state_desc FROM sys.database_mirroring_endpoints on the PRIMARY node, then use the scripts in the db-failover Git repo)