Based on the "Kubernetes Setup Using Ansible and Vagrant" repository. See details here. This repository adds Nexus and Jenkins deployments to Kubernetes using Helm charts.
For clarity, parts of this repo were developed in other repositories and are included here. For the detailed commit history, please check those repositories.
Work in progress
- Kubernetes cluster setup
- Jenkins install
- Nexus install
- Java sample app repository
- Helm chart for java application
- Jenkinsfile
- Dockerfile
- Jenkins trigger
- Build
- Test
- Push artifacts to Nexus
- Test full steps
For Linux:
- Vagrant
- VirtualBox
- Ansible
For Mac:
- Vagrant
- VirtualBox
- Ansible
- Vagrant should be installed on your machine. Installation binaries can be found here.
- Oracle VirtualBox can be used as a Vagrant provider, or you can use a similar provider as described in Vagrant's official documentation.
- Ansible should be installed on your machine. Refer to the Ansible installation guide for platform-specific installation.
- Clone this repository to your computer.
- In your terminal of choice, go to the repository root folder and run Install-Linux.sh or Install-Mac.sh, depending on your operating system. You need to do this only once.
- The setup script installs the prerequisites on your computer.
- After the initial setup script finishes, your computer is ready to run up.sh.
- The up.sh script provisions the k8s master and nodes using VirtualBox. This may take 5-15 minutes depending on your machine configuration.
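The two-step bring-up above can be sketched as a small launcher script. The script names (Install-Linux.sh, Install-Mac.sh, up.sh) come from this repository; the pick_installer helper is a hypothetical convenience added here for illustration.

```shell
#!/bin/sh
# Hypothetical helper: map the OS name reported by uname to the matching
# one-time setup script from this repository.
pick_installer() {
  case "$1" in
    Linux)  echo "Install-Linux.sh" ;;
    Darwin) echo "Install-Mac.sh" ;;
    *)      echo "unsupported OS: $1" >&2; return 1 ;;
  esac
}

installer="$(pick_installer "$(uname -s)")"
echo "one-time setup: ./$installer"
echo "provision cluster: ./up.sh"
```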
- Sample Java App
- Dockerizing Sample Java App
- Kubernetes cluster setup using Ansible and Vagrant
- Deploying Jenkins and Nexus
- Helm Charts
- Deploying Sample App
- Jenkins pipeline
- FAQ
- Recommended Reading
The repository contains a simple Java application which outputs the string "Hello world!" and is accompanied by a couple of unit tests to check that the main application works as expected. The results of these tests are saved to a JUnit XML report.
package com.mycompany.app;

/**
 * Hello world!
 */
public class App
{
    private final String message = "Hello World!";

    public App() {}

    public static void main(String[] args) {
        System.out.println(new App().getMessage());
    }

    private final String getMessage() {
        return message;
    }
}
A multi-stage approach is used to build the Docker image.
#
# Build stage
#
FROM maven:3.5-jdk-8 AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
#
# Package stage
#
#FROM openjdk:11-jre-slim
FROM gcr.io/distroless/java
COPY --from=build /home/app/target/my-app-1.0-SNAPSHOT.jar /usr/local/lib/my-app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/my-app.jar"]
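Assuming the image is tagged sampleapp (a name chosen here for illustration), building and running it would look like the commands below. docker is stubbed as a shell function so the sketch runs without a Docker daemon; drop the stub to execute for real.

```shell
# Stub: echo the docker invocations instead of executing them.
docker() { echo "docker $*"; }

docker build -t sampleapp:local .            # runs both stages, keeps only the distroless layer
docker run --rm -p 8080:8080 sampleapp:local
```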
Vagrant is a tool that allows us to create a virtual environment easily. It can be used with multiple providers such as Oracle VirtualBox, VMware, and Docker. In this setup, VirtualBox is used as the provider. A Kubernetes cluster consisting of one master and n worker nodes is provisioned using Ansible playbooks. This setup provides a production-like cluster that can be set up on your local machine without manual configuration. A detailed explanation of the steps required to set up a multi-node Kubernetes cluster for development purposes can be found here.
Kubernetes Setup Using Ansible and Vagrant
This repo is based on the Kubernetes.io blog post about setting up a Kubernetes cluster using Ansible and Vagrant.
For more details, see the blog post.
- Vagrant should be installed on your machine. Installation binaries can be found here.
- Oracle VirtualBox can be used as a Vagrant provider, or you can use a similar provider as described in Vagrant's official documentation.
- Ansible should be installed on your machine. Refer to the Ansible installation guide for platform-specific installation.
- Install scripts are provided in the repository for Linux and Mac.
- The value of IMAGE_NAME can be changed to reflect the desired Vagrant base image.
- The value of N denotes the number of worker nodes in the cluster and can be modified accordingly. In the example below, N is set to 2.
IMAGE_NAME = "ubuntu/focal64"
N = 2

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false

  config.vm.provider "virtualbox" do |v|
    v.memory = 1024
    v.cpus = 2
  end

  config.vm.define "k8s-master" do |master|
    master.vm.box = IMAGE_NAME
    master.vm.network "private_network", ip: "192.168.50.10"
    master.vm.hostname = "k8s-master"
    master.vm.provision "ansible" do |ansible|
      ansible.playbook = "kubernetes-setup/master-playbook.yml"
      ansible.extra_vars = {
        node_ip: "192.168.50.10",
      }
    end
  end

  (1..N).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.box = IMAGE_NAME
      node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
      node.vm.hostname = "node-#{i}"
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "kubernetes-setup/node-playbook.yml"
        ansible.extra_vars = {
          node_ip: "192.168.50.#{i + 10}",
        }
      end
    end
  end
end
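The addressing scheme the Vagrantfile assigns (master at .10, node-i at .(i+10)) can be reproduced with a few lines of shell, which is handy for sanity-checking the host/IP layout before provisioning:

```shell
# Mirror the Vagrantfile's IP scheme: node-i gets 192.168.50.(i+10).
node_ip() { echo "192.168.50.$(( $1 + 10 ))"; }

N=2
echo "k8s-master 192.168.50.10"
i=1
while [ "$i" -le "$N" ]; do
  echo "node-$i $(node_ip "$i")"
  i=$((i + 1))
done
```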
- Create two files named master-playbook.yml and node-playbook.yml in the directory kubernetes-setup. These files contain the provisioning steps for the master and the nodes, respectively.
- The following packages are installed, and then a user named "vagrant" is added to the "docker" group:
  - docker-ce
  - docker-ce-cli
  - containerd.io
---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - apt-transport-https
        - ca-certificates
        - curl
        - gnupg-agent
        - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependencies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - docker-ce
        - docker-ce-cli
        - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker
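The effect of the last task can be checked offline by inspecting the docker line of /etc/group. The sample entry below is illustrative (the gid is made up); on a provisioned node you would read the real line with grep '^docker:' /etc/group.

```shell
# Illustrative /etc/group entry; the gid 998 is made up.
group_line="docker:x:998:vagrant"

# The member list is everything after the last colon.
in_docker_group() {
  case "${1##*:}" in
    *vagrant*) echo yes ;;
    *)         echo no ;;
  esac
}

in_docker_group "$group_line"
```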
- Kubelet will not start if the system has swap enabled, so we disable swap using the code below.
  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none

  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0
- Install kubelet, kubeadm, and kubectl using the code below.
  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list

  - name: Install Kubernetes binaries
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - kubelet
        - kubeadm
        - kubectl

  - name: Configure node ip
    lineinfile:
      path: /etc/default/kubelet
      line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}

  - name: Restart kubelet
    service:
      name: kubelet
      daemon_reload: yes
      state: restarted
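What the "Configure node ip" task writes can be sketched directly. A temp file stands in for /etc/default/kubelet so the sketch does not touch system files; the IP is the master's address from the Vagrantfile.

```shell
node_ip="192.168.50.10"
line="KUBELET_EXTRA_ARGS=--node-ip=${node_ip}"

conf="$(mktemp)"              # stand-in for /etc/default/kubelet
echo "$line" > "$conf"
cat "$conf"
rm -f "$conf"
```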
- Initialize the Kubernetes cluster with kubeadm using the code below (applicable only on the master node).
  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
- Set up the kubeconfig file for the vagrant user to access the Kubernetes cluster using the code below.
  - name: Setup kubeconfig for vagrant user
    command: "{{ item }}"
    with_items:
      - mkdir -p /home/vagrant/.kube
      - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
      - chown vagrant:vagrant /home/vagrant/.kube/config
- Set up the container networking provider and the network policy engine using the code below.
  - name: Install calico pod network
    become: false
    command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
- Generate the kubeadm join command for joining nodes to the Kubernetes cluster and store it in a file named join-command.
  - name: Generate join command
    command: kubeadm token create --print-join-command
    register: join_command

  - name: Copy join command to local file
    local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
- Set up a handler for checking the Docker daemon using the code below.
  handlers:
    - name: docker status
      service: name=docker state=started
- Create a file named node-playbook.yml in the directory kubernetes-setup.
- Add the code from steps 2.1-2.3 to node-playbook.yml.
- Add the code below to node-playbook.yml.
- Add the code from step 2.7 to finish this playbook.
  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh
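The join-command round trip (master generates it, nodes execute it) can be sketched offline. kubeadm is stubbed as a shell function here, and the token and hash values are made up for illustration.

```shell
# Stub kubeadm so the flow runs without a cluster; token/hash are fake.
kubeadm() {
  echo "kubeadm join 192.168.50.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0000000000000000000000000000000000000000000000000000000000000000"
}

# Master side: capture the command and save it (same file name the playbook uses).
join_cmd="$(kubeadm token create --print-join-command)"
printf '%s' "$join_cmd" > ./join-command

# Node side: the playbook copies this file to the node and runs it with sh.
cat ./join-command
rm -f ./join-command
```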
- Provision the cluster by running the command below from the repository root.
vagrant up
- Upon completion of all the above steps, the Kubernetes cluster should be up and running. We can log in to the master or worker nodes using Vagrant as follows:
$ ## Accessing master
$ vagrant ssh k8s-master
vagrant@k8s-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 18m v1.13.3
node-1 Ready <none> 12m v1.13.3
node-2 Ready <none> 6m22s v1.13.3
$ ## Accessing nodes
$ vagrant ssh node-1
$ vagrant ssh node-2
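A quick readiness check on the output of kubectl get nodes can be scripted. In this sketch kubectl is stubbed with the sample output shown above so the check runs offline; on the master node you would drop the stub and use the real kubectl.

```shell
# Stub kubectl with the sample `kubectl get nodes` output from above.
kubectl() {
cat <<'EOF'
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   18m     v1.13.3
node-1       Ready    <none>   12m     v1.13.3
node-2       Ready    <none>   6m22s   v1.13.3
EOF
}

# Count rows (skipping the header) whose STATUS column is Ready.
ready_count() { kubectl get nodes | awk 'NR > 1 && $2 == "Ready"' | wc -l; }

echo "$(ready_count) nodes Ready"
```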
To include the Nexus and Jenkins setup in the Ansible provisioning, the Vagrantfile must be modified to accommodate a new Ansible playbook. After the vagrant user is added to the docker group, the user must log off and log in again for the change to take effect. Since this step is automated, the node has to be reloaded instead; the Vagrant Reload plugin (https://github.com/aidanns/vagrant-reload) is used to achieve this. After the reload, Vagrant runs app-playbook.yml.
node.vm.provision :reload
node.vm.provision "ansible" do |ansible|
  ansible.playbook = "app-setup/app-playbook.yml"
  ansible.extra_vars = {
    node_ip: "192.168.50.#{i + 10}",
  }
end
This playbook first installs the kubeconfig on the vagrant node, then installs Helm and pip, which are required by the deployment steps. Installation steps can be found here.
1. Get your 'admin' user password by running:
   kubectl exec --namespace default -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/chart-admin-password && echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
   echo http://127.0.0.1:8080
   kubectl --namespace default port-forward svc/jenkins 8080:8080
3. Log in with the password from step 1 and the username: admin
4. Configure the security realm and authorization strategy.
5. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file; see documentation: http:///configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos
For more information on running Jenkins on Kubernetes, visit: https://cloud.google.com/solutions/jenkins-on-container-engine
For more information about Jenkins Configuration as Code, visit: https://jenkins.io/projects/jcasc/
A Helm chart is created for the sample Java app. This Helm chart and others are hosted in the Helm-Charts repository.
Helm repositories should normally be hosted on a separate website, but for this project GitHub's raw view is used.
You can simply use the following command to add this chart repository to your helm:
$ helm repo add okutkan 'https://raw.githubusercontent.com/okutkan/helm-charts/master/'
$ helm repo update
$ helm search simplejavaapp
NAME VERSION DESCRIPTION
okutkan/simplejavaapp 0.1.2 A Hel
helm package JavaMavenSampleApp # builds the tgz file and copies it here
helm repo index . # creates or updates the index.yaml for the repo
git add . # you know how this works
git commit -m 'New chart version'
- Chart.yaml: Contains the main chart definition, such as name, version, etc.
- values.yaml: Contains default values for the templates.
- templates/deployment.yaml: Contains the deployment definition for the sample app's Kubernetes deployment. The image name, imagePullPolicy, container port, and environment variables are defined here.
- templates/service.yaml: Contains the service definition for Kubernetes. The service connection type and node port are defined here.
- to be detailed
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    environment {
        NEXUS_VERSION = "nexus3"
        NEXUS_PROTOCOL = "http"
        NEXUS_URL = "192.168.50.11:8081"
        NEXUS_REPOSITORY = "maven-nexus-repo"
        NEXUS_CREDENTIAL_ID = "nexus-user-credentials"
        imageid = "maven-nexus-repo/sampleapp"
        registryCredential = 'nexus-user-credentials'
        CLUSTER_URL = "https://192.168.50.1:8443"
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                }
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build(imageid + ":$BUILD_NUMBER")
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $imageid:$BUILD_NUMBER"
            }
        }
        stage('Publish to Nexus Repository Manager') {
            steps {
                script {
                    pom = readMavenPom file: "pom.xml"
                    filesByGlob = findFiles(glob: "target/*.${pom.packaging}")
                    echo "${filesByGlob[0].name} ${filesByGlob[0].path} ${filesByGlob[0].directory} ${filesByGlob[0].length} ${filesByGlob[0].lastModified}"
                    artifactPath = filesByGlob[0].path
                    artifactExists = fileExists artifactPath
                    if (artifactExists) {
                        echo "*** File: ${artifactPath}, group: ${pom.groupId}, packaging: ${pom.packaging}, version ${pom.version}"
                        nexusArtifactUploader(
                            nexusVersion: NEXUS_VERSION,
                            protocol: NEXUS_PROTOCOL,
                            nexusUrl: NEXUS_URL,
                            groupId: pom.groupId,
                            version: pom.version,
                            repository: NEXUS_REPOSITORY,
                            credentialsId: NEXUS_CREDENTIAL_ID,
                            artifacts: [
                                [artifactId: pom.artifactId,
                                 classifier: '',
                                 file: artifactPath,
                                 type: pom.packaging],
                                [artifactId: pom.artifactId,
                                 classifier: '',
                                 file: "pom.xml",
                                 type: "pom"]
                            ]
                        )
                    } else {
                        error "*** File: ${artifactPath}, could not be found"
                    }
                }
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
            }
        }
        stage('Deploy') {
            steps {
                withKubeConfig([credentialsId: 'kubernetes-sa',
                                serverUrl: CLUSTER_URL]) {
                    sh """
                        helm init --client-only
                        helm upgrade \
                            sampleapp \
                            --namespace default \
                            --install \
                            --wait \
                            --set image=$imageid \
                            ./helm
                    """
                }
            }
        }
    }
}
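The Deploy stage's helm invocation can be exercised in isolation. In this sketch helm is stubbed as a shell function so no cluster is needed, and imageid mirrors the value from the pipeline's environment block.

```shell
# Stub helm so the argument handling can be shown without a cluster.
helm() { echo "helm $*"; }

imageid="maven-nexus-repo/sampleapp"   # same value as the pipeline environment

helm upgrade sampleapp --namespace default --install --wait \
  --set image="$imageid" ./helm
```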
- to be detailed
- https://www.ansible.com/blog/automating-helm-using-ansible
- https://galaxy.ansible.com/kubernetes/core
- https://artifacthub.io/packages/helm/sonatype/nexus-repository-manager
- https://artifacthub.io/packages/helm/jenkinsci/jenkins
- https://www.jenkins.io/doc/book/installing/kubernetes/#install-jenkins-with-helm-v3
- https://blog.sonatype.com/workflow-automation-publishing-artifacts-to-nexus-using-jenkins-pipelines
- https://medium.com/appfleet/publishing-artifacts-to-sonatype-nexus-using-jenkins-pipelines-db8c1412dc7