## v0.2.0
The theme of this release is usability improvements and more granular control over node placement.
Features such as specifying etcd endpoints directly on the cluster spec eliminate the need to provide a manual
configuration for custom etcd endpoints. Per-cluster etcd environments allow users to colocate multiple M3DB
clusters on a single etcd cluster.
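For example, endpoints for an external etcd cluster can now be set directly on the `M3DBCluster` spec. A minimal
sketch, assuming the field is named `etcdEndpoints` and using placeholder endpoint addresses:

```yaml
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: my-cluster
spec:
  # Hypothetical endpoints; replace with your etcd cluster's client URLs.
  etcdEndpoints:
    - http://etcd-0.etcd:2379
    - http://etcd-1.etcd:2379
    - http://etcd-2.etcd:2379
```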
Users can now specify more complex affinity terms, as well as taints that their clusters tolerate, allowing
specific nodes to be dedicated to M3DB. See the affinity docs for more details.
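A sketch of the new placement controls, assuming the `nodeAffinityTerms` and `tolerations` field names below and a
hypothetical `dedicated=m3db:NoSchedule` taint on the target nodes:

```yaml
spec:
  isolationGroups:
    - name: group1
      numInstances: 3
      # Constrain this group's pods to a single zone.
      nodeAffinityTerms:
        - key: failure-domain.beta.kubernetes.io/zone
          values:
            - us-east1-b
  # Tolerate the taint used to reserve nodes for M3DB.
  tolerations:
    - key: dedicated
      operator: Equal
      value: m3db
      effect: NoSchedule
```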
- [FEATURE] Allow specifying etcd endpoints on M3DBCluster spec (#99)
- [FEATURE] Allow specifying security contexts for M3DB pods (#107)
- [FEATURE] Allow specifying tolerations of M3DB pods (#111)
- [FEATURE] Allow specifying pod priority classes (#119)
- [FEATURE] Use a dedicated etcd-environment per-cluster to support sharing etcd clusters (#99)
- [FEATURE] Support more granular node affinity per-isolation group (#106) (#131)
- [ENHANCEMENT] Change default M3DB bootstrapper config to recover more easily when an entire cluster is taken down
  (#112)
- [ENHANCEMENT] Build + release with Go 1.12 (#114)
- [ENHANCEMENT] Continuously reconcile configmaps (#118)
- [BUGFIX] Allow unknown protobuf fields to be unmarshalled (#117)
- [BUGFIX] Fix pod removal when removing more than 1 pod at a time (#125)
### Breaking Changes
0.2.0 changes how M3DB stores its cluster topology in etcd so that multiple M3DB clusters can share a single etcd
cluster. A migration script is provided to copy etcd data from the old format to the new format. If migrating an
operated cluster, run that script (see the script for instructions) and then perform a rolling restart of your M3DB
pods by deleting them one at a time.
If using a custom configmap, this same change will require a modification to your configmap. See the
warning in the docs about how to ensure your configmap is compatible.
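As an illustrative fragment of the affected configmap section, assuming M3DB's `config.service` etcd discovery block
and the new per-cluster environment convention of `<namespace>/<cluster-name>` (names and endpoints here are
hypothetical):

```yaml
db:
  config:
    service:
      # Must now be unique per cluster, e.g. <namespace>/<cluster-name>.
      env: default/my-cluster
      zone: embedded
      service: m3db
      etcdClusters:
        - zone: embedded
          endpoints:
            - http://etcd-0.etcd:2379
```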