Template for fully specified JupyterHub deployment with hubploy.
hubploy does not manage your cloud resources - only your Kubernetes resources. You should use some other means to create your cloud resources. At a minimum, hubploy expects a Kubernetes cluster with [helm installed](https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub/setup-helm.html). Many installations want to use a shared file system for home directories, so in those cases you want to have that managed outside hubploy as well.
You also need the following tools installed:
- Your cloud vendor's command-line tool:
  - Google Cloud SDK for Google Cloud
  - AWS CLI for AWS
  - Azure CLI for Azure
- A local install of helm 2. Note that helm 3 is not supported yet. The client version should match the version on your server (you can find your server version with `helm version`).
- A docker environment that you can use. This is only needed when building images.
```
python3 -m venv .
source bin/activate
python3 -m pip install -r requirements.txt
```

This installs hubploy and its dependencies.
Each directory inside `deployments/` represents an installation of JupyterHub. The default is called `myhub`, but please rename it to something more descriptive, and `git commit` the result as well:

```
git mv deployments/myhub deployments/<your-hub-name>
git commit
```
You need to find all things marked TODO and fill them in. In particular, `hubploy.yaml` needs information about where your docker registry & kubernetes cluster are, along with paths to access keys. `secrets/prod.yaml` and `secrets/staging.yaml` require secure random keys you can generate and fill in.
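One way to generate such a key is with openssl (assumed to be available on your machine; any cryptographically secure random generator works):

```shell
# Generate a 32-byte (64 hex character) random token, suitable for
# the secret values in secrets/prod.yaml and secrets/staging.yaml.
openssl rand -hex 32
```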
Make sure the appropriate docker credential helper is installed, so hubploy can push to the registry you need.

- For AWS, you need docker-ecr-credential-helper
- For Google Cloud, you need the gcloud command-line tool
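As an illustration, docker credential helpers are wired up through `~/.docker/config.json`. The sketch below assumes the ECR helper is installed as a binary named `docker-credential-ecr-login`; the registry hostname is a placeholder for your own:

```json
{
  "credHelpers": {
    "<aws-account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}
```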
Make sure you are in your repo's root directory, so hubploy can find the directory structure it expects.
Build and push the image to the registry:

```
hubploy build <hub-name> --push --check-registry
```

This should check if the user image for your hub needs to be rebuilt, and if so, it'll build and push it.
Each hub will always have two versions - a staging hub that isn't used by actual users, and a production hub that is. These two should be kept as similar as possible, so you can fearlessly test stuff on the staging hub without fear that it is going to crash & burn when deployed to production.
To deploy to the staging hub:

```
hubploy deploy <hub-name> hub staging
```
This should take a while, but eventually return successfully. You can then find the public IP of your hub with:

```
kubectl -n <hub-name>-staging get svc proxy-public
```
If you access that IP, you should be able to log in with any username & password. It might take a minute to become accessible.
The defaults provision each user with their own EBS / Persistent Disk, so this can get expensive quickly :) Watch out!
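If the per-user disk cost worries you, the storage request can usually be tuned in the hub's config. A sketch for `deployments/<hub-name>/config/common.yaml`, assuming the Zero to JupyterHub chart's `singleuser.storage` options (check your chart version's documentation for the exact option names):

```yaml
# Illustrative only: request a smaller persistent volume per user.
singleuser:
  storage:
    capacity: 1Gi
```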
You can now customize your hub in two major ways:
- Customize the hub image. repo2docker is used to build the image, so you can put any of the supported configuration files under `deployments/<hub-image>/image`. You must make a git commit after modifying this for `hubploy build <hub-name> --push --check-registry` to work, since it uses the commit hash as the image tag.
- Customize hub configuration with various YAML files.
  - `hub/values.yaml` is common to all hubs that exist in this repo (multiple hubs can live under `deployments/`).
  - `deployments/<hub-name>/config/common.yaml` is where most of the config specific to each hub should go. Examples include memory / cpu limits and home directory definitions.
  - `deployments/<hub-name>/config/staging.yaml` and `deployments/<hub-name>/config/prod.yaml` are files specific to the staging & prod versions of the hub. These should be as minimal as possible. Ideally, only DNS entries and IP addresses should be here.
  - `deployments/<hub-name>/secrets/staging.yaml` and `deployments/<hub-name>/secrets/prod.yaml` should contain information that mustn't be public. This would be proxy / hub secret tokens, any authentication tokens you have, etc. These files must be protected by something like git-crypt or [sops](https://github.com/mozilla/sops). THIS REPO TEMPLATE DOES NOT HAVE THIS PROTECTION SET UP YET
You can customize the staging hub, deploy it with `hubploy deploy <hub-name> hub staging`, and iterate until you like how it behaves.

You can then do a production deployment with `hubploy deploy <hub-name> hub prod`, and test it out!
git-crypt is used to keep encrypted secrets in the git repository. We would eventually like to use something like sops, but for now...
Install git-crypt. You can get it from brew or your package manager.
In your repo, initialize it:

```
git crypt init
```
In `.gitattributes`, have the following contents:

```
deployments/*/secrets/** filter=git-crypt diff=git-crypt
deployments/**/secrets/** filter=git-crypt diff=git-crypt
support/secrets.yaml filter=git-crypt diff=git-crypt
```
Make a copy of your encryption key. This will be used to decrypt the secrets. You will need to share it with your CD provider, and anyone else who needs to decrypt the secrets:

```
git crypt export-key key
```

This puts the key in a file called `key`.
Get a base64 copy of your key:

```
cat key | base64
```
Put it as a secret named `GIT_CRYPT_KEY` in GitHub secrets.
Make sure you change `myhub` to your deployment name in the workflows under `.github/workflows`.
Push to the staging branch, and check out GitHub actions, to see if your action goes to completion.
If the staging action succeeds, make a PR from staging to prod, and merge this PR. This should also trigger an action - see if this works out.
Note: Always make a PR from staging to prod, never push directly to prod. We want to keep the staging and prod branches as close to each other as possible, and this is the only long term guaranteed way to do that.
- TODO: Document what kind of Kubernetes setup this needs.