Deploy ProxySQL on kubernetes #4
I identified two ways we can deploy a ProxySQL high-availability cluster.

**Using a Kubernetes-style approach**

This solution is showcased in this repo: https://github.com/ProxySQL/kubernetes

Basically, it makes each ProxySQL instance standalone, without using the native clustering capabilities. It relies on a Kubernetes ConfigMap to maintain the config. This is easy to deploy and maintain; however, it is not well suited for migrating from our current deployment.

**Using native ProxySQL clustering capabilities**

Introduced back in 1.4.2: https://proxysql.com/blog/proxysql-cluster/

Current documentation: https://proxysql.com/documentation/ProxySQL-Cluster/

It doesn't seem to have evolved much in the meantime.

Advantages:
Drawbacks:
They don't seem to have a concept of master/slave instances at the moment. However, from what I understand, their implementation can still somewhat simulate this: adding instance B to instance A's config only enables A to pull newer config from B (which is what makes B a "master"). So you could spin up an instance without registering it in any other instance's config; it could still pull config from the others, but nothing would pull config from it, effectively making it a "slave".

So we could work around this lack of functionality by keeping only a handful of "fully" joined instances in the config and maintaining that list manually. If they were deployed in our cluster, that would be very easy, as they could be referenced via DNS as "proxysql-0", "proxysql-1", "proxysql-2". They would act as the masters from which config updates propagate.

We could use this functionality to add ProxySQL to our Kubernetes cluster, synchronizing it with the current main instance. Config would only be editable from our main instance. Later on, we could let one or more of the in-Kubernetes ProxySQL instances become our master(s) to complete the migration process.

TL;DR: Basic, but enough features are implemented for the simple config replication one would expect for a stateless service. I haven't tried this yet; this is only the result of reading the docs and articles.
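To make the clustering idea concrete, here is a sketch of what a node's `proxysql.cnf` might look like under this scheme. The hostnames, credentials, and comments are hypothetical (I'm assuming the three masters are reachable via a headless Service); 6032 is ProxySQL's default admin port:

```
# proxysql.cnf -- hypothetical sketch of one node in the cluster.

# Credentials the cluster nodes use to talk to each other's admin
# interface.
admin_variables=
{
    admin_credentials="admin:admin;cluster:clusterpass"
    cluster_username="cluster"
    cluster_password="clusterpass"
}

# Peers this instance is allowed to PULL newer config from.
# Leaving an instance out of everyone else's proxysql_servers list
# makes it a "slave": it can pull, but nobody pulls from it.
proxysql_servers=
(
    { hostname="proxysql-0.proxysql", port=6032, comment="master 0" },
    { hostname="proxysql-1.proxysql", port=6032, comment="master 1" },
    { hostname="proxysql-2.proxysql", port=6032, comment="master 2" }
)
```

With this shape, the "fully joined" masters would each list the other two, while any number of slaves carry the same `proxysql_servers` block without appearing in it themselves.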
Honestly I'm liking the first solution more. The admin interface can be pretty jank to work with, so I believe a config file may be more manageable from that angle too. Are you able to export the current production config to a file and shoot it over so I can take a look (if that's possible)? It would need to be kept private, as I think it contains IPs at the very least.
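For reference, the first solution would look roughly like the following sketch: config in a ConfigMap, each pod a standalone ProxySQL reading it from disk. All names, the replica count, and the mount path are my assumptions, not taken from the ProxySQL/kubernetes repo:

```yaml
# Hypothetical sketch of the ConfigMap-based deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxysql-config
data:
  proxysql.cnf: |
    # mysql_servers, mysql_users, query rules, etc. would go here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxysql
spec:
  replicas: 3
  selector:
    matchLabels:
      app: proxysql
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      containers:
        - name: proxysql
          image: proxysql/proxysql
          volumeMounts:
            # Mount only the one key as /etc/proxysql.cnf
            - name: config
              mountPath: /etc/proxysql.cnf
              subPath: proxysql.cnf
      volumes:
        - name: config
          configMap:
            name: proxysql-config
```

Rolling out a config change would then mean updating the ConfigMap and restarting the pods, since ProxySQL won't re-read the file on its own.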
If you don't actually need the admin interface, then the 1st solution is obviously worth looking into. The current config can be exported to the config file format.

However, we cannot synchronize this new config to ProxySQL instances outside of Kubernetes. Let's say we want to migrate 100% to in-Kubernetes ProxySQL, then. We can expose the service from all Kubernetes nodes as a NodePort inside our VPC. However, the DigitalOcean Kubernetes nodes' IP addresses are not static, and we also need some kind of load balancing to avoid all the traffic going through one node (although that would still share the CPU load among all ProxySQL instances inside the cluster, it's a lot of bandwidth for one node to handle).
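On the export: if the production instance is on ProxySQL 2.0.6 or later, I believe the admin interface can dump the full configuration to the config file format directly. Treat this as an assumption to verify against their docs (the path here is just an example):

```sql
-- Connect to the admin interface (default port 6032) with the mysql
-- client, then export the current configuration as a .cnf file:
SAVE CONFIG TO FILE /tmp/proxysql-export.cnf;
```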
As an update, in-Kubernetes ProxySQL seems to be working as expected on private osu-web testing. We should probably start wiring more in-Kubernetes services to it rather than our legacy deployment, and then figure out how to open it to non-k8s droplets. |
The version we are running is quite outdated and has a rare tendency to fall over. It would be beneficial if we could run one (or more) instances on Kubernetes, to allow for easier upgrades and better resilience.
Things that need consideration:

- `mysql-dump`

For reference, ProxySQL is high CPU, low everything-else.