This directory contains a set of bases that can directly be used to deploy the
ShinyProxy Operator. It also contains some example deployments in the overlays
directory.
Before showing how to deploy the operator, this section describes the components and dependencies of the operator.
- Operator: the operator itself, which manages the different ShinyProxy servers.
- ShinyProxy: the ShinyProxy servers; these host the Shiny apps. You do not need to create these servers manually, since they are created by the operator. Instead, you define which servers to create, and the operator creates all necessary Kubernetes resources, without affecting any existing server or causing downtime.
- Redis: Redis is used by ShinyProxy (not by the operator) to implement session and app persistence. This ensures that when a ShinyProxy server is replaced, the user is still logged in and all apps remain active. Redis is always required when using the operator. When deploying Redis on the Kubernetes cluster, we advise using Redis Sentinel so that Redis runs in a highly available way. It is also possible to use a Redis server provided by a cloud provider.

Note: when deploying to production, it is important to change the password used to secure Redis. Each example (see below) already changes the password to `mySecurePassword12`; for an example see the `overlays/1-namespaced/patches/redis.secret.yaml` file. Make sure to change the password before deploying, or see the section on changing the Redis password below for instructions on how to change the password after deploying.
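For illustration, a minimal sketch of what such a Secret could look like, assuming the `redis` Secret name and `redis-password` key that the pod patches later in this document refer to (the actual patch file in the repository may differ):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redis            # assumed name, matching the secretKeyRef used later in this document
  namespace: shinyproxy
type: Opaque
stringData:
  redis-password: mySecurePassword12  # change this before deploying
```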
This section provides a step-by-step tutorial on the basic deployment of the ShinyProxy operator on minikube.
- This tutorial requires a few tools to be installed: `minikube`, `kubectl`, `kustomize` and `git` (each is used in the steps below).
- Start minikube:

  ```bash
  minikube start --kubernetes-version='v1.30.3' --addons=metrics-server,ingress --container-runtime=containerd
  ```
- Clone this repository and change the working directory:

  ```bash
  git clone https://github.com/openanalytics/shinyproxy-operator
  cd shinyproxy-operator/docs/deployment/overlays/1-namespaced
  ```
- Apply all resources:

  ```bash
  kustomize build . | kubectl apply -f - --server-side
  ```

  Note: this command may not finish successfully on the first attempt; for example, you could get the following message:

  ```
  unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
  unable to recognize "STDIN": no matches for kind "ShinyProxy" in version "openanalytics.eu/v1"
  ```

  In this case, just re-run the command; the resources should then get created. (There is no way to specify the order of resources or the dependencies between resources in `kustomize`, so re-running the command is the only workaround.)
- Wait for all the resources to start up. At this point the operator should start. It is now time to configure web access to the cluster. First get the IP of minikube:

  ```bash
  minikube ip
  ```

  Next, add the following entries to `/etc/hosts`, replacing `MINIKUBE_IP` with the output of the previous command:

  ```
  MINIKUBE_IP shinyproxy-demo.local
  MINIKUBE_IP shinyproxy-demo2.local
  ```
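  For example, if `minikube ip` printed `192.168.49.2` (a value chosen purely for illustration; yours will likely differ), the entries would read:

  ```
  192.168.49.2 shinyproxy-demo.local
  192.168.49.2 shinyproxy-demo2.local
  ```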
- Once all deployments are finished, you can access ShinyProxy at `shinyproxy-demo.local`. You will get a security warning from your browser because of the invalid (self-signed) certificate. You can safely bypass this warning during this example.
- Wait until the ShinyProxy instance is fully started (until then, you will see a Not Found page).
- Try to launch an application and keep this application running.
- Change something in the `resources/shinyproxy.shinyproxy.yaml` file. For example, change the `title` property and instruct the operator to create two ShinyProxy replicas:

  ```yaml
  apiVersion: openanalytics.eu/v1
  kind: ShinyProxy
  metadata:
    name: shinyproxy
    namespace: shinyproxy
  spec:
    # ...
    proxy:
      store-mode: Redis
      stop-proxies-on-shutdown: false
      title: ShinyProxy 2 # <- MAKE THE CHANGE HERE
      # ...
    replicas: 2 # <- ADD THIS LINE
    image: openanalytics/shinyproxy:3.1.1
    imagePullPolicy: Always
    fqdn: shinyproxy-demo.local
  ```
- Apply this change using `kubectl`:

  ```bash
  kubectl apply -f resources/shinyproxy.shinyproxy.yaml
  ```

  The operator now deploys a new ShinyProxy instance. The old instance is kept intact as long as a WebSocket connection is active on it, and is automatically removed once it no longer has any open WebSocket connections. New requests are immediately handled by the new server as soon as it is ready. Try going to the main page of ShinyProxy and check whether the change you made has been applied.
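  To observe the rolling update while it happens, you can watch the pods in the `shinyproxy` namespace; you should briefly see the old and new ShinyProxy pods running side by side:

  ```bash
  kubectl get pods -n shinyproxy --watch
  ```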
- Try the other examples. The following commands first remove the current example; next you can open another example (e.g. `2-clustered`) and deploy it using `kubectl`:

  ```bash
  kubectl delete namespace/shinyproxy
  kubectl delete namespace/shinyproxy-operator # may fail
  kubectl delete namespace/shinyproxy-dept2    # may fail
  kubectl delete namespace/my-namespace        # may fail
  kubectl delete namespace/redis               # may fail
  cd ../2-clustered
  kustomize build . | kubectl apply -f -
  ```
The Operator is designed to be flexible and to fit many types of deployment. This repository includes examples for several kinds of deployments:
- 1-namespaced:
  - Operator-mode: `namespaced`
  - Operator-namespace: `shinyproxy`
  - Redis-namespace: `shinyproxy`
  - ShinyProxy-namespace: `shinyproxy`
  - URLs: https://shinyproxy-demo.local

  This is a very simple deployment of the operator, where everything runs in the same namespace.
- 2-clustered:
  - Operator-mode: `clustered`
  - Operator-namespace: `shinyproxy-operator`
  - Redis-namespace: `redis`
  - ShinyProxy-namespace: `shinyproxy` and `shinyproxy-dept2`
  - URLs: https://shinyproxy-demo.local and https://shinyproxy-demo2.local

  In this example, the operator runs in `clustered` mode. Therefore, the operator will look into all namespaces for `ShinyProxy` resources and deploy these resources in their respective namespace. This example also demonstrates how the Operator can be used in a multi-tenancy or multi-realm way. Each ShinyProxy server runs in its own namespace, isolated from the other servers. However, they are managed by a single operator.
- 3-namespaced-app-ns:
  - Operator-mode: `namespaced`
  - Operator-namespace: `shinyproxy`
  - Redis-namespace: `shinyproxy`
  - ShinyProxy-namespace: `shinyproxy`
  - URLs: https://shinyproxy-demo.local

  Similar to example 1; however, the `01_hello` app will now run in the `my-namespace` namespace instead of the `shinyproxy` namespace. In addition to the change in the `shinyproxy.shinyproxy.yaml` file, this configuration requires the definition of the extra namespace and the modification of the `ServiceAccount` of the ShinyProxy server.
- 4-namespaced-multi:
  - Operator-mode: `namespaced`
  - Operator-namespace: `shinyproxy`
  - Redis-namespace: `shinyproxy`
  - ShinyProxy-namespace: `shinyproxy`
  - URLs: https://shinyproxy-demo.local/shinyproxy1/, https://shinyproxy-demo.local/shinyproxy2/ and https://shinyproxy-demo.local/shinyproxy3/

  Based on the second example, this example shows how multi-tenancy can be achieved using sub-paths instead of multiple domain names. Each ShinyProxy server is made available at the same domain name, but at a different path under that domain name.
The `CustomResourceDefinition` of the operator can be found in the `bases/namespaced/operator/crd.yaml` file (the CRD is identical for `clustered` and `namespaced` deployments). The following sections of this file are important:
- `spring`: config related to Spring, such as the Redis connection information
- `proxy`: the configuration of ShinyProxy; this is the same configuration as if you were manually deploying ShinyProxy
- `kubernetesPodTemplateSpecPatches`: allows patching the `PodTemplate` of the ReplicaSet created by the operator (see the example below)
- `kubernetesIngressPatches`: allows patching the `Ingress` resources created by the operator (see the example below)
- `kubernetesServicePatches`: allows patching the `Service` resources created by the operator (see the example below)
- `image`: the Docker image to use for the ShinyProxy server (e.g. `openanalytics/shinyproxy:3.1.1`)
- `imagePullPolicy`: the pull policy for the ShinyProxy image; the default value is `IfNotPresent`; valid options are `Never`, `IfNotPresent` and `Always`
- `fqdn`: the FQDN at which the service should be available, e.g. `shinyproxy-demo.local`
- `additionalFqdns`: (optional) a list of additional FQDNs that can be used to access this ShinyProxy server
- `appNamespaces`: a list of namespaces in which apps will be deployed. This is only needed when you change the namespace of an app using the `kubernetes-pod-patches` feature. The namespaces of the operator and the ShinyProxy instance are automatically included
- `antiAffinityTopologyKey`: the topology key to use in the anti-affinity configuration of the ShinyProxy pods
- `antiAffinityRequired`: if enabled, the anti-affinity rules are `required` instead of `preferred`
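Putting several of these fields together, a minimal `ShinyProxy` resource could look as follows. This is a sketch based on the examples in this document; the `additionalFqdns`, `appNamespaces` and anti-affinity values are purely illustrative:

```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    title: ShinyProxy               # regular ShinyProxy configuration goes here
    # ...
  image: openanalytics/shinyproxy:3.1.1
  imagePullPolicy: IfNotPresent
  fqdn: shinyproxy-demo.local
  additionalFqdns:                  # optional
    - shinyproxy-demo2.local
  appNamespaces:                    # only needed with kubernetes-pod-patches
    - my-namespace
  antiAffinityTopologyKey: topology.kubernetes.io/zone
  antiAffinityRequired: false
```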
The ShinyProxy Operator automatically creates an Ingress resource for each
ShinyProxy resource you create. This Ingress resource points to the correct
Kubernetes Service (which is also created by the operator). The created Ingress
resource contains everything that is needed for a working ShinyProxy deployment.
However, in some cases it is required to modify the resource. This can be
achieved using the `kubernetesIngressPatches` field. This field should contain a
string with a list of JSON Patches to apply to the Ingress resource. The above
examples already include the following patch:
```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    # ...
  kubernetesIngressPatches: |
    - op: add
      path: /metadata/annotations
      value:
        nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/proxy-body-size: 300m
    - op: add
      path: /spec/ingressClassName
      value: nginx
    - op: add
      path: /spec/tls
      value:
        - hosts:
            - shinyproxy-demo.local
          # secretName: example # uncomment and change this line if needed
  image: openanalytics/shinyproxy:3.1.1
  imagePullPolicy: Always
  fqdn: shinyproxy-demo.local
```
The first patch adds some additional annotations to the Ingress resource, for
example, in order to set up a redirect from HTTP to HTTPS. The second patch
changes the `ingressClassName` to `nginx`. Finally, the last patch configures TLS
for the Ingress resource. In a production environment, you can uncomment the
line with the `secretName` to refer to a proper secret. Any patch is accepted,
but make sure that the resulting Ingress resource still works for the ShinyProxy
deployment. The ShinyProxy Operator logs the manifest before and after applying
the patch, which can be useful while creating the patches.
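To inspect these logged manifests, you can tail the operator logs. The Deployment name below is an assumption (it may differ in your setup), and the namespace follows example 1:

```bash
# Assumes the operator runs as a Deployment named shinyproxy-operator
# in the shinyproxy namespace; adjust both to your setup.
kubectl logs -n shinyproxy deployment/shinyproxy-operator --follow
```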
Note: the previous section only applies to version 2 of the operator. Version 1 behaves differently, since it uses Skipper as an (intermediate) ingress controller.
The Operator automatically creates a ReplicaSet for each ShinyProxy resource you
create. This ReplicaSet contains a `PodTemplate`, which contains all necessary
settings for creating a proper ShinyProxy pod. In a lot of cases, it can be
useful to adapt this `PodTemplate` to the specific context in which ShinyProxy
is running. For example, it's a good idea to specify the resource requests and
limits, or sometimes it's required to add a toleration to the pod. These
modifications can be achieved using the `kubernetesPodTemplateSpecPatches` field.
This field should contain a string with a list of JSON Patches to apply to the
`PodTemplate`. The above examples already include the following patch:
```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    # ...
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: REDIS_PASSWORD
        valueFrom:
          secretKeyRef:
            name: redis
            key: redis-password
    - op: add
      path: /spec/containers/0/resources
      value:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 0.5
          memory: 1Gi
    - op: add
      path: /spec/serviceAccountName
      value: shinyproxy-sa
  image: openanalytics/shinyproxy:3.1.1
  imagePullPolicy: Always
  fqdn: shinyproxy-demo.local
```
The above configuration contains three patches. The first patch adds an
environment variable with the password used for connecting to the Redis server.
The second patch configures the resource limits and requests of the ShinyProxy
pod. Finally, the last patch configures the `ServiceAccount` of the pod.

Note: when using this feature, it is important not to break any existing configuration of the pod. For example, when you want to mount additional ConfigMaps, use the following configuration:
```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/volumes/-
      value:
        name: myconfig
        configMap:
          name: some-configmap
    - op: add
      path: /spec/containers/0/volumeMounts/-
      value:
        mountPath: /mnt/configmap
        name: myconfig
        readOnly: true
```
In this example, the `path` property of the patch always ends with a `-`, which
indicates that the patch adds a new entry to the end of the array
(e.g. of `/spec/volumes`).
The following patch breaks the behavior of the ShinyProxy pod and should therefore not be used:
```yaml
# NOTE: this is a demo of a WRONG configuration - do not use
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/volumes
      value:
        - name: myconfig
          configMap:
            name: some-configmap
    - op: add
      path: /spec/containers/0/volumeMounts
      value:
        - mountPath: /mnt/configmap
          name: myconfig
          readOnly: true
```
This patch replaces the existing `/spec/volumes`
and `/spec/containers/0/volumeMounts` arrays of the pod. The ShinyProxy Operator
automatically creates a mount for a ConfigMap which contains the ShinyProxy
configuration. By overriding these mounts, this ConfigMap is no longer mounted and
the default (demo) configuration of ShinyProxy is loaded.
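After applying patches, it is worth verifying that the operator-managed config mount is still present in the rendered pod. A quick way to check (pod names will vary; `<pod-name>` is a placeholder to fill in from the first command's output):

```bash
# List the ShinyProxy pods, then print the mounts of one of them.
kubectl get pods -n shinyproxy
kubectl get pod -n shinyproxy <pod-name> -o yaml | grep -A5 volumeMounts
```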
The ShinyProxy Operator automatically creates a Service resource for each
ShinyProxy resource you create. The created Service resource contains everything
that is needed for a working ShinyProxy deployment. However, in some cases it is
required to modify the resource. This can be achieved using
the `kubernetesServicePatches` field. This field should contain a string with a
list of JSON Patches to apply to the Service resource. For example:
```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    # ...
  kubernetesServicePatches: |
    - op: add
      path: /metadata/annotations
      value:
        my-annotation: my-value
  image: openanalytics/shinyproxy:3.1.1
  imagePullPolicy: Always
  fqdn: shinyproxy-demo.local
```
This example patch adds the annotation `my-annotation: my-value` to the Service
resource created by the operator.
Starting with version 2.1.0, the operator automatically adds anti-affinity
rules, such that Kubernetes will try not to schedule multiple ShinyProxy
replicas on the same Kubernetes node. Note that this only has an effect when
running multiple replicas of ShinyProxy. If Kubernetes is unable to satisfy the
requirement, it will still schedule multiple replicas on the same node. This
behavior can be changed by setting `antiAffinityRequired` to `true` in your
ShinyProxy configuration. It is also possible to change the topology by setting
the `antiAffinityTopologyKey`; e.g. to not run multiple replicas in the same
availability zone, you can set this property to `topology.kubernetes.io/zone`.
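For reference, a preferred pod anti-affinity rule in Kubernetes has roughly the following shape. This is an illustrative sketch of the general mechanism only; the exact labels and weights the operator sets are not documented here:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: shinyproxy                 # hypothetical label
          topologyKey: kubernetes.io/hostname # node-level topology
```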
Each example changes the password to `mySecurePassword12`. It's important to
change this password in your environment. Ideally, the password should be changed
before deploying Redis for the first time, since changing the password after the
initial deployment requires deleting all data.

In order to change the password after deployment:

Note: during this process ShinyProxy will be stopped, all running apps are stopped and all users are logged out!
- Change the password in the yaml file (e.g. in `overlays/1-namespaced/patches/redis.secret.yaml`).
- Stop ShinyProxy by removing the ShinyProxy resource (pods of running apps are not automatically removed):

  ```bash
  kubectl delete shinyproxy -n shinyproxy shinyproxy
  ```

- Delete all Redis related resources:

  ```bash
  kubectl delete statefulset -n shinyproxy redis-node
  kubectl delete pvc -n shinyproxy redis-data-redis-node-0
  kubectl delete pvc -n shinyproxy redis-data-redis-node-1
  kubectl delete pvc -n shinyproxy redis-data-redis-node-2
  ```

- Wait for all related pods to be stopped and all resources to be removed.
- Check that the `PersistentVolumes` of Redis are removed using `kubectl get pv`.
- Re-deploy Redis and ShinyProxy:

  ```bash
  kustomize build . | kubectl apply -f - --server-side
  ```
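  Once the command finishes, you can watch the Redis and ShinyProxy pods come back up before directing users to the server again:

  ```bash
  kubectl get pods -n shinyproxy --watch
  ```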