Feature request: Ability to configure controlled cloudflared deployment spec (and more resources) #128
Got that. 🤔 When I built the operator for the "managed cloudflared connector", I did realize that there would be lots of potential customizations. So I just made the simplest cloudflared run in the cluster. For now, you could set the replicas down to 0 to disable managed cloudflared and bring your own. I will consider redesigning the managed cloudflared connector part, but I do not have any ideas yet. What do you think about that? Do you have any suggestions about the new design for the managed cloudflared connector? ❤️
From a user perspective, the controller is deployed using a Helm chart, so I'd expect the controlled-cloudflared pod spec to be configurable using Helm values. Ideally using well-known fields that mirror a regular pod template spec. Something like:

```yaml
# most of the values here can be helm defaults
controlledCloudflared:
  podTemplate:
    labels:
      something: test
    annotations:
      other: thing
    replicas: 2
    securityContext:
      runAsUser: 1000
      runAsGroup: 3000
      fsGroup: 2000
      supplementalGroups: [4000]
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone
        whenUnsatisfiable: DoNotSchedule
    volumes:
      - name: sec-ctx-vol
        emptyDir: {}
  containerTemplate:
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 1
        memory: 1Gi
    volumeMounts:
      - name: sec-ctx-vol
        mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
  additionalContainers: {}
  podDisruptionBudget:
    maxUnavailable: 1
```

There are several ways these Helm values could be passed to the controller. One option is to store the pod template spec in a ConfigMap rendered by a Helm template, validate it at runtime using a native Kubernetes SDK approach (and fail controller readiness on failed validation), and use the ConfigMap value to assemble the Deployment spec.
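The ConfigMap idea above could be sketched as a Helm template roughly like this (the file name, ConfigMap name, and data key are hypothetical, not part of the actual chart):

```yaml
# templates/controlled-cloudflared-config.yaml (hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  name: controlled-cloudflared-pod-template
data:
  # the controller would read and validate this key at startup
  podTemplate: |
    {{- toYaml .Values.controlledCloudflared.podTemplate | nindent 4 }}
```

`toYaml` and `nindent` are standard Helm template functions, so the rendered ConfigMap stays in sync with whatever the user puts into values.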
Also, it would require a way to update the controlled-cloudflared Deployment spec on configuration change. Either via:
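One common Helm pattern for rolling a Deployment whenever its configuration changes is a checksum annotation on the pod template; a sketch (the annotation key and template path are illustrative):

```yaml
spec:
  template:
    metadata:
      annotations:
        # changing the rendered ConfigMap changes the hash, which
        # changes the pod template and triggers a rollout
        checksum/config: {{ include (print $.Template.BasePath "/controlled-cloudflared-config.yaml") . | sha256sum }}
```

Alternatively, the controller itself could watch the ConfigMap and reconcile the Deployment spec when it changes.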
For now I'm using Kyverno to patch the Deployment and generate the PDB on the fly.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: controlled-cloudflared-patch
  annotations:
    policies.kyverno.io/title: Patch controlled-cloudflared Deployment
    policies.kyverno.io/category: Best Practices
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Deployment
    policies.kyverno.io/description: >-
      Patch controlled-cloudflared Deployment's Pod spec to have
      required and best-practices fields. Also inject
      PodDisruptionBudget. All with goal of having better
      operability.
spec:
  generateExisting: true
  useServerSideApply: true
  rules:
    - name: inject-fields-into-container-spec
      match:
        any:
          - resources:
              kinds:
                - Deployment
              namespaces:
                - ingress
              names:
                - controlled-cloudflared-connector
      mutate:
        patchStrategicMerge:
          spec:
            template:
              spec:
                containers:
                  - (name): controlled-cloudflared-connector
                    resources:
                      requests:
                        cpu: 10m
                        memory: 50Mi
                      limits:
                        cpu: 1
                        memory: 500Mi
                    livenessProbe:
                      httpGet:
                        path: /ready
                        port: 44483
                        scheme: HTTP
                    readinessProbe:
                      httpGet:
                        path: /ready
                        port: 44483
                        scheme: HTTP
                    startupProbe:
                      httpGet:
                        path: /ready
                        port: 44483
                        scheme: HTTP
                      failureThreshold: 60
                      periodSeconds: 10
                    securityContext:
                      allowPrivilegeEscalation: false
                      capabilities:
                        drop:
                          - ALL
                      readOnlyRootFilesystem: true
                      runAsNonRoot: true
                      runAsUser: 65532
                      runAsGroup: 65532
                      seccompProfile:
                        type: RuntimeDefault
                securityContext:
                  sysctls:
                    - name: net.ipv4.ping_group_range
                      value: 0 65536
                topologySpreadConstraints:
                  - labelSelector:
                      matchLabels:
                        app: controlled-cloudflared-connector
                    maxSkew: 1
                    topologyKey: topology.kubernetes.io/zone
                    whenUnsatisfiable: ScheduleAnyway
    - name: create-pdb
      match:
        any:
          - resources:
              kinds:
                - Deployment
              namespaces:
                - ingress
              names:
                - controlled-cloudflared-connector
      generate:
        apiVersion: policy/v1
        kind: PodDisruptionBudget
        name: "{{request.object.metadata.name}}"
        namespace: "{{request.object.metadata.namespace}}"
        data:
          spec:
            minAvailable: 1
            selector: "{{request.object.spec.selector}}"
```
Hi @z0rc,
I am not sure about embedding a completed
Also, from @omegazeng's proposal, letting the user modify the
I noticed you do not take the
I am not sure how to make a flexible
Thanks in advance. 🫡
I didn't put all possible pod spec fields in the example, only those I was mainly interested in. Let's think this through. Consider these only as suggestions, not as a complete design.

First, I'd split command:

```yaml
command:
  - cloudflared
args:
  - --no-autoupdate
  - tunnel
  - --metrics
  - 0.0.0.0:44483
  - run
  - --token
  - <token>
```

Third, I'd add an option to pass extra args using Helm templates, so the resulting values would be:

```yaml
controlledCloudflared:
  containerTemplate:
    # check https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/tunnel-run-parameters/ for the list of available options
    extraArgs: []
    # - --protocol
    # - http2
```

Which would be used in the Helm template something like this:

```yaml
command:
  - cloudflared
args:
  - --no-autoupdate
  - tunnel
  - --metrics
  - 0.0.0.0:44483
  - run
  - --token
  - <token>
  {{- with .Values.controlledCloudflared.containerTemplate.extraArgs }}
  {{ toYaml . }}
  {{- end }}
```
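To illustrate the intended effect of that `extraArgs` pass-through, here is roughly what the rendered container args would look like if a user set the commented-out example values (this is an expected-render sketch, not chart output I have verified):

```yaml
# with values:
# controlledCloudflared:
#   containerTemplate:
#     extraArgs:
#       - --protocol
#       - http2
# the template would render approximately:
args:
  - --no-autoupdate
  - tunnel
  - --metrics
  - 0.0.0.0:44483
  - run
  - --token
  - <token>
  - --protocol
  - http2
```

The required args stay fixed by the chart, while users can only append tunnel run parameters.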
Personally, I consider the regular pod template the easiest form to configure. There still must be a way to keep the strictly necessary spec fields (the cloudflared container with its command and required args), but the rest of the pod template would be easy to understand as it follows the Kubernetes API. On the other hand, implementing a custom interface to configure the pod template spec wouldn't provide any benefits, but would require maintaining a complex translation layer, which must follow Kubernetes API changes (infrequent, but they still happen from time to time).
Update: AFAIU
I need the ability to set a number of fields in the controlled cloudflared Deployment's pod template spec:
The lack of default readiness and liveness probes is a bummer, but I can set them up myself, if allowed.
In general, it's expected to be able to control (and override) every field in the Deployment's pod template spec.
In addition, I need the ability to deploy a PodDisruptionBudget resource alongside the Deployment.
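For reference, a standalone PodDisruptionBudget matching the controlled-cloudflared Deployment could look like the following (the `app` label selector is assumed from the Kyverno example elsewhere in this thread, not confirmed against the operator's actual labels):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: controlled-cloudflared-connector
  namespace: ingress
spec:
  # keep at least one tunnel replica up during voluntary disruptions
  minAvailable: 1
  selector:
    matchLabels:
      app: controlled-cloudflared-connector
```

This only has an effect when replicas is greater than 1; with a single replica, `minAvailable: 1` blocks voluntary evictions entirely.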