Tornjak simple deployment with SPIRE k8s quickstart

In this tutorial, we will show how to configure Tornjak with a SPIRE deployment using the SPIRE k8s quickstart tutorial. This is heavily inspired by the SPIRE quickstart for Kubernetes.

This tutorial will get you up and running with a local deployment of SPIRE and Tornjak in three simple steps: setting up the deployment files, deploying, and connecting to Tornjak.

Contents

- Step 0: Requirements
- Step 1: Setup deployment files
- Step 2: Deployment of SPIRE and co-located Tornjak
- Step 3: Configuring Access to Tornjak
- Cleanup

Step 0: Requirements

We tested this quickstart with the environment shown in the transcripts below:

- minikube v1.12.0 (Docker driver)
- Kubernetes v1.18.3
- Docker 19.03.2
- kubectl configured against the cluster

Step 1: Setup deployment files

Setting up k8s

For this tutorial, we will use minikube. If you have an existing Kubernetes cluster, feel free to use that.

minikube start
😄  minikube v1.12.0 on Darwin 11.2
🎉  minikube 1.18.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.18.1
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=1989MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   79s   v1.18.3

Obtaining the Deployment Files

To obtain the relevant files, clone our git repository and cd into the correct directory:

git clone https://github.com/spiffe/tornjak.git
cd tornjak
cd docs/quickstart

The files in this directory are largely the same as those provided by the SPIRE quickstart for Kubernetes, but with a few key differences. Note the new ConfigMap file:

cat tornjak-configmap.yaml 

This configmap has contents to configure the Tornjak backend:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tornjak-agent
  namespace: spire
data:
  server.conf: |

    server {
      # location of SPIRE socket
      # here, set to default SPIRE socket path
      spire_socket_path = "unix:///tmp/spire-server/private/api.sock"

      # configure HTTP connection to Tornjak server
      http {
        enabled = true
        port = 10000 # opens at port 10000
      }
    }

    plugins {
      DataStore "sql" { # local database plugin
        plugin_data {
          drivername = "sqlite3"
          filename = "/run/spire/data/tornjak.sqlite3" # stores locally in this file
        }
      }
    }

More information on this config file format can be found in our config documentation.
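Before moving on, you can sanity-check that this manifest parses cleanly without touching the cluster. A quick client-side dry run (available in recent kubectl releases):

kubectl apply --dry-run=client -f tornjak-configmap.yaml
# prints "configmap/tornjak-agent created (dry run)" if the manifest is valid; nothing is actually created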

Additionally, we have sample server-statefulset files in the directory server-statefulset-examples. We will copy one of them into place depending on which deployment scheme you choose.

Choosing the Statefulset Deployment

These steps will differ depending on which deployment scheme makes sense for you. Note that we have deprecated support for the use case where parts of Tornjak run in the same container as SPIRE.

Currently, we support two deployment schemes:

  1. Only the Tornjak backend (which serves the Tornjak API calls) runs as a separate container in the same pod, exposing a single port (for communication with the Tornjak backend). Deploying or using the frontend requires additional steps. However, this deployment type is fully supported, ships a smaller sidecar image without the frontend components, and ensures that the frontend and backend share no memory.
  2. The Tornjak frontend (UI) and backend run in the same container, which exposes two separate ports (one for the frontend and one for the backend). This is useful for getting started with Tornjak with minimal deployment steps.

Choose one of the options below to copy in the right server-statefulset file for your deployment.

🔴 [Click] For the deployment of the Tornjak backend (API) and frontend (UI) (our default deployment, recommended for those getting started)

This has the same architecture as deploying with just a Tornjak backend, but with an additional Tornjak frontend process deployed in the same container. This will expose two ports: one for the frontend and one for the backend.

There is an additional requirement to mount the SPIRE server socket and make it accessible to the Tornjak backend container.

The relevant file is called tornjak-sidecar-server-statefulset.yaml within the examples directory. Copy it into place as follows:

cp server-statefulset-examples/tornjak-sidecar-server-statefulset.yaml server-statefulset.yaml

The statefulset will look something like this, where changed or new lines are marked with a 👈 comment:

cat server-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spire-server
  namespace: spire
  labels:
    app: spire-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spire-server
  serviceName: spire-server
  template:
    metadata:
      namespace: spire
      labels:
        app: spire-server
    spec:
      serviceAccountName: spire-server
      containers:
        - name: spire-server
          image: ghcr.io/spiffe/spire-server:1.4.4
          args:
            - -config
            - /run/spire/config/server.conf
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: spire-config
              mountPath: /run/spire/config
              readOnly: true
            - name: spire-data
              mountPath: /run/spire/data
              readOnly: false
            - name: socket                         # 👈 ADDITIONAL VOLUME
              mountPath: /tmp/spire-server/private # 👈 ADDITIONAL VOLUME
          livenessProbe:
            httpGet:
              path: /live
              port: 8080
            failureThreshold: 2
            initialDelaySeconds: 15
            periodSeconds: 60
            timeoutSeconds: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
        ### 👈 BEGIN ADDITIONAL CONTAINER ###
        - name: tornjak
          image: ghcr.io/spiffe/tornjak:latest
          imagePullPolicy: Always
          args:
            - -config
            - /run/spire/config/server.conf
            - -tornjak-config
            - /run/spire/tornjak-config/server.conf
          env: 
            - name: REACT_APP_API_SERVER_URI
              value: http://localhost:10000
            - name: NODE_OPTIONS
              value: --openssl-legacy-provider
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: spire-config
              mountPath: /run/spire/config
              readOnly: true
            - name: tornjak-config
              mountPath: /run/spire/tornjak-config
              readOnly: true
            - name: spire-data
              mountPath: /run/spire/data
              readOnly: false
            - name: socket
              mountPath: /tmp/spire-server/private
        ### 👈 END ADDITIONAL CONTAINER ###
      volumes:
        - name: spire-config
          configMap:
            name: spire-server
        - name: tornjak-config  # 👈 ADDITIONAL VOLUME
          configMap:            # 👈 ADDITIONAL VOLUME
            name: tornjak-agent # 👈 ADDITIONAL VOLUME
        - name: socket          # 👈 ADDITIONAL VOLUME
          emptyDir: {}          # 👈 ADDITIONAL VOLUME
  volumeClaimTemplates:
    - metadata:
        name: spire-data
        namespace: spire
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

Note that there are three key differences in the StatefulSet file from that in the SPIRE quickstart:

  1. There is a new container in the pod named tornjak.
    1. This container uses environment variables to configure the frontend.
    2. This container uses command-line arguments to configure the backend.
  2. We create a volume named tornjak-config that reads from the ConfigMap tornjak-agent.
  3. We create a volume named socket so that the containers may communicate.
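Once the pod is running (after Step 2), you can confirm that the SPIRE server socket is actually visible from the Tornjak container through the shared socket volume. A minimal check, assuming the pod and container names from the manifest above:

kubectl exec -n spire -c tornjak spire-server-0 -- ls -l /tmp/spire-server/private
# expect to see api.sock, created there by the SPIRE server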
🔴 [Click] For the deployment of only the Tornjak backend (API)

There is an additional requirement to mount the SPIRE server socket and make it accessible to the Tornjak backend container.

The relevant file is called backend-sidecar-server-statefulset.yaml within the examples directory. Copy it into place as follows:

cp server-statefulset-examples/backend-sidecar-server-statefulset.yaml server-statefulset.yaml

The statefulset will look something like this, where changed or new lines are marked with a 👈 comment:

cat server-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spire-server
  namespace: spire
  labels:
    app: spire-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spire-server
  serviceName: spire-server
  template:
    metadata:
      namespace: spire
      labels:
        app: spire-server
    spec:
      serviceAccountName: spire-server
      containers:
        - name: spire-server
          image: ghcr.io/spiffe/spire-server:1.4.4
          args:
            - -config
            - /run/spire/config/server.conf
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: spire-config
              mountPath: /run/spire/config
              readOnly: true
            - name: spire-data
              mountPath: /run/spire/data
              readOnly: false
            - name: socket                         # 👈 ADDITIONAL VOLUME
              mountPath: /tmp/spire-server/private # 👈 ADDITIONAL VOLUME
          livenessProbe:
            httpGet:
              path: /live
              port: 8080
            failureThreshold: 2
            initialDelaySeconds: 15
            periodSeconds: 60
            timeoutSeconds: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
        ### 👈 BEGIN ADDITIONAL CONTAINER ###
        - name: tornjak-backend
          image: ghcr.io/spiffe/tornjak-backend:latest
          args:
            - --config
            - /run/spire/config/server.conf
            - --tornjak-config
            - /run/spire/tornjak-config/server.conf
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: spire-config
              mountPath: /run/spire/config
              readOnly: true
            - name: tornjak-config
              mountPath: /run/spire/tornjak-config
              readOnly: true
            - name: spire-data
              mountPath: /run/spire/data
              readOnly: false
            - name: socket
              mountPath: /tmp/spire-server/private
        ### 👈 END ADDITIONAL CONTAINER ###
      volumes:
        - name: spire-config
          configMap:
            name: spire-server
        - name: tornjak-config  # 👈 ADDITIONAL VOLUME
          configMap:            # 👈 ADDITIONAL VOLUME
            name: tornjak-agent # 👈 ADDITIONAL VOLUME
        - name: socket          # 👈 ADDITIONAL VOLUME
          emptyDir: {}          # 👈 ADDITIONAL VOLUME
  volumeClaimTemplates:
    - metadata:
        name: spire-data
        namespace: spire
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

Note that there are three key differences in this StatefulSet file from that in the SPIRE quickstart:

  1. There is a new container in the pod named tornjak-backend.
  2. We create a volume named tornjak-config that reads from the ConfigMap tornjak-agent.
  3. We create a volume named socket so that the containers may communicate.

This is all done specifically to pass the Tornjak config file as an argument to the container and to allow communication between Tornjak and SPIRE.
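If the pod comes up but Tornjak API calls fail later, the backend container's logs are the first place to look. A minimal sketch, using the container name from the manifest above:

kubectl logs -n spire spire-server-0 -c tornjak-backend
# look for the HTTP listener starting on port 10000 and for SPIRE socket errors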

Step 2: Deployment of SPIRE and co-located Tornjak

Now that we have the correct deployment files, follow the steps below to deploy Tornjak and SPIRE!

NOTE: In a Windows environment, you will need to replace the backslashes ( \ ) below with backticks ( ` ) before pasting into a Windows terminal

kubectl apply -f spire-namespace.yaml \
    -f server-account.yaml \
    -f spire-bundle-configmap.yaml \
    -f tornjak-configmap.yaml \
    -f server-cluster-role.yaml \
    -f server-configmap.yaml \
    -f server-statefulset.yaml \
    -f server-service.yaml

The above command should deploy the SPIRE server with Tornjak:

namespace/spire created
serviceaccount/spire-server created
configmap/spire-bundle created
configmap/tornjak-agent created
role.rbac.authorization.k8s.io/spire-server-configmap-role created
rolebinding.rbac.authorization.k8s.io/spire-server-configmap-role-binding created
clusterrole.rbac.authorization.k8s.io/spire-server-trust-role created
clusterrolebinding.rbac.authorization.k8s.io/spire-server-trust-role-binding created
configmap/spire-server created
statefulset.apps/spire-server created
service/spire-server created
service/tornjak-backend-http created
service/tornjak-backend-tls created
service/tornjak-backend-mtls created
service/tornjak-frontend created

Before continuing, check that the spire-server is ready:

kubectl get statefulset --namespace spire
NAME           READY   AGE
spire-server   1/1     26s

NOTE: You may initially see 0/1 in the READY column. Wait a few minutes and try again.
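If you prefer not to poll by hand, kubectl can block until the pod reports ready. For example:

kubectl wait --namespace spire --for=condition=Ready pod/spire-server-0 --timeout=300s
# returns once the pod is Ready, or errors out after five minutes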

Deploying the agent and creating test entries

The following steps will configure and deploy the SPIRE agent.

NOTE: In a Windows environment, you will need to replace the backslashes ( \ ) below with backticks ( ` ) before pasting into a Windows terminal

kubectl apply \
    -f agent-account.yaml \
    -f agent-cluster-role.yaml \
    -f agent-configmap.yaml \
    -f agent-daemonset.yaml
serviceaccount/spire-agent created
clusterrole.rbac.authorization.k8s.io/spire-agent-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/spire-agent-cluster-role-binding created
configmap/spire-agent created
daemonset.apps/spire-agent created
kubectl get daemonset --namespace spire
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
spire-agent   1         1         1       1            1           <none>          19s
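If the daemonset does not become ready, the agent logs usually explain why. A hedged example, assuming the quickstart daemonset labels its pods with app: spire-agent:

kubectl logs --namespace spire -l app=spire-agent
# prints logs from all pods matching the label selector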

Then, we can create a registration entry for the node.

NOTE: In a Windows environment, you will need to replace the backslashes ( \ ) below with backticks ( ` ) before pasting into a Windows terminal

kubectl exec -n spire -c spire-server spire-server-0 -- \
    /opt/spire/bin/spire-server entry create \
    -spiffeID spiffe://example.org/ns/spire/sa/spire-agent \
    -selector k8s_sat:cluster:demo-cluster \
    -selector k8s_sat:agent_ns:spire \
    -selector k8s_sat:agent_sa:spire-agent \
    -node
Entry ID         : 03d0ec2b-54b7-4340-a0b9-d3b2cf1b041a
SPIFFE ID        : spiffe://example.org/ns/spire/sa/spire-agent
Parent ID        : spiffe://example.org/spire/server
Revision         : 0
TTL              : default
Selector         : k8s_sat:agent_ns:spire
Selector         : k8s_sat:agent_sa:spire-agent
Selector         : k8s_sat:cluster:demo-cluster

Next, we create a registration entry for the workload, specifying the workload's SPIFFE ID:

NOTE: In a Windows environment, you will need to replace the backslashes ( \ ) below with backticks ( ` ) before pasting into a Windows terminal

kubectl exec -n spire -c spire-server spire-server-0 -- \
    /opt/spire/bin/spire-server entry create \
    -spiffeID spiffe://example.org/ns/default/sa/default \
    -parentID spiffe://example.org/ns/spire/sa/spire-agent \
    -selector k8s:ns:default \
    -selector k8s:sa:default
Entry ID         : 11a367ab-7095-4390-ab89-34dea5fddd61
SPIFFE ID        : spiffe://example.org/ns/default/sa/default
Parent ID        : spiffe://example.org/ns/spire/sa/spire-agent
Revision         : 0
TTL              : default
Selector         : k8s:ns:default
Selector         : k8s:sa:default
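You can confirm that both entries were registered by listing them with the same CLI used above:

kubectl exec -n spire -c spire-server spire-server-0 -- \
    /opt/spire/bin/spire-server entry show
# prints every registration entry currently stored by the server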

Finally, here we deploy a workload container:

kubectl apply -f client-deployment.yaml
deployment.apps/client created

Then verify that the container can access the Workload API UNIX domain socket:

kubectl exec -it $(kubectl get pods -o=jsonpath='{.items[0].metadata.name}' \
   -l app=client)  -- /opt/spire/bin/spire-agent api fetch -socketPath /run/spire/sockets/agent.sock
Received 1 svid after 8.8537ms

SPIFFE ID:		spiffe://example.org/ns/default/sa/default
SVID Valid After:	2021-04-06 20:13:02 +0000 UTC
SVID Valid Until:	2021-04-06 21:13:12 +0000 UTC
CA #1 Valid After:	2021-04-06 20:12:20 +0000 UTC
CA #1 Valid Until:	2021-04-07 20:12:30 +0000 UTC
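To inspect the issued certificate itself, the same fetch command can write the SVID and key material to disk with the -write flag (the output directory here is illustrative):

kubectl exec -it $(kubectl get pods -o=jsonpath='{.items[0].metadata.name}' \
   -l app=client) -- /opt/spire/bin/spire-agent api fetch \
   -socketPath /run/spire/sockets/agent.sock -write /tmp
# writes PEM files (SVID, key, and bundle) under /tmp inside the client container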

Let's verify that the spire-server-0 pod is now started with the new image:

kubectl -n spire describe pod spire-server-0 | grep "Image:"

or, on Windows:

kubectl -n spire describe pod spire-server-0 | select-string "Image:"

This should yield two lines, the second of which depends on which deployment you used:

    Image:         ghcr.io/spiffe/spire-server:1.4.4
    Image:         <TORNJAK-IMAGE>

where <TORNJAK-IMAGE> is ghcr.io/spiffe/tornjak:latest if you deployed Tornjak with the UI, and ghcr.io/spiffe/tornjak-backend:latest if you deployed only the Tornjak backend.

Step 3: Configuring Access to Tornjak

Step 3a: Connecting to the Tornjak backend to make Tornjak API calls

The Tornjak HTTP server is running on port 10000 on the pod. It can easily be accessed by performing a local port forward using kubectl, which proxies local port 10000 to the Tornjak HTTP server.

kubectl -n spire port-forward spire-server-0 10000:10000

You'll see something like this:

Forwarding from 127.0.0.1:10000 -> 10000
Forwarding from [::1]:10000 -> 10000

While this runs, open a browser to

http://localhost:10000/api/tornjak/serverinfo

The output shown is the backend's response. Now you should be able to make Tornjak API calls!
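The same check works from the command line, which is handy for scripting. Using the URL above:

curl http://localhost:10000/api/tornjak/serverinfo
# returns the same JSON the browser shows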

[Screenshot: tornjak-agent-browser, the serverinfo response shown in a browser]

Step 3b: Connecting to the Tornjak frontend to access the Tornjak UI

Make sure that the backend is accessible from your browser at http://localhost:10000, as above, or the frontend will not work.

If you chose to deploy Tornjak with the UI, connecting to the UI is very simple. Otherwise, you can always run the UI locally and connect. See the two choices below:

🔴 [Click] Connect to the Tornjak frontend that is deployed on Minikube

Note that if you chose to deploy the Tornjak image that includes the frontend component, you only need to execute the following command to enable access to the frontend that is already running:

kubectl -n spire port-forward spire-server-0 3000:3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
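Before opening a browser, you can confirm that the frontend is serving (a quick check; run it in another terminal while the port forward is active):

curl -I http://localhost:3000
# an HTTP 200 response means the UI is up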
🔴 [Click] Run the Tornjak frontend locally

You will need to run the frontend separately to access the exposed Tornjak backend. We have prebuilt the frontend in a container, so you can simply run it via a single docker command in a separate terminal; it will take a couple of minutes to start:

docker run -p 3000:3000 -e REACT_APP_API_SERVER_URI='http://localhost:10000' ghcr.io/spiffe/tornjak-frontend:latest 

After the image is downloaded, you will eventually see the following output:

> tornjak-frontend@… start
> react-scripts --openssl-legacy-provider start

ℹ 「wds」: Project is running at http://172.17.0.3/
ℹ 「wds」: webpack output is served from 
ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...

Compiled successfully!

You can now view tornjak-frontend in the browser.

  Local:            http://localhost:3000
  On Your Network:  http://172.17.0.3:3000

Note that the development build is not optimized.
To create a production build, use npm run build.

Note, it will likely take a few minutes for the application to compile successfully.

Either of the above options exposes the frontend at http://localhost:3000. If you visit it in your browser, you should see this page:

[Screenshot: tornjak-ui, the Tornjak UI landing page]

Cleanup

Here are the steps to clean up the deployed entities. First, delete the workload container:

kubectl delete deployment client

Then, delete the SPIRE agent and server, along with the namespace we created:

kubectl delete namespace spire

NOTE: You may need to wait a few minutes for the deletion to complete and the prompt to return

Finally, we can delete the ClusterRole and ClusterRoleBinding:

kubectl delete clusterrole spire-server-trust-role spire-agent-cluster-role
kubectl delete clusterrolebinding spire-server-trust-role-binding spire-agent-cluster-role-binding
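If you created the minikube cluster solely for this tutorial, you can also remove the cluster itself:

minikube delete
# deletes the local cluster and its Docker container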