diff --git a/2018/CDI-DataVolumes.html b/2018/CDI-DataVolumes.html index 0819b97262..4ec187ed55 100644 --- a/2018/CDI-DataVolumes.html +++ b/2018/CDI-DataVolumes.html @@ -356,7 +356,7 @@

Creating a VirtualMach source: http: url: "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2" - running: true + runStrategy: Always template: metadata: labels: diff --git a/2018/Deploying-VMs-on-Kubernetes-GlusterFS-KubeVirt.html b/2018/Deploying-VMs-on-Kubernetes-GlusterFS-KubeVirt.html index 2d08040e98..d1b8b52c2a 100644 --- a/2018/Deploying-VMs-on-Kubernetes-GlusterFS-KubeVirt.html +++ b/2018/Deploying-VMs-on-Kubernetes-GlusterFS-KubeVirt.html @@ -627,7 +627,7 @@

Deploying Virtual Machines

kubevirt.io/ovm: cirros name: cirros spec: - running: false + runStrategy: Halted template: metadata: creationTimestamp: null diff --git a/2018/KubeVirt-objects.html b/2018/KubeVirt-objects.html index a71792d9e3..f893b20423 100644 --- a/2018/KubeVirt-objects.html +++ b/2018/KubeVirt-objects.html @@ -633,19 +633,19 @@

OfflineVirtualMachine

What is Running in OfflineVirtualMachine?

-

.spec.running controls whether the associated VirtualMachine object is created. In other words this changes the power status of the virtual machine.

+

.spec.runStrategy controls whether and when the associated VirtualMachineInstance object is created. In other words, this controls the power status of the virtual machine.

-
  running: true
+
  runStrategy: Always
 
-

This will create a VirtualMachine object which will instantiate and power on a virtual machine.

+

This will create a VirtualMachineInstance object which will instantiate and power on a virtual machine.

-
kubectl patch offlinevirtualmachine mongodb --type merge -p '{"spec":{"running":true }}' -n nodejs-ex
+
kubectl patch offlinevirtualmachine mongodb --type merge -p '{"spec":{"runStrategy": "Always"}}' -n nodejs-ex
 
-

This will delete the VirtualMachine object which will power off the virtual machine.

+

This will delete the VirtualMachineInstance object which will power off the virtual machine.

-
kubectl patch offlinevirtualmachine mongodb --type merge -p '{"spec":{"running":false }}' -n nodejs-ex
+
kubectl patch offlinevirtualmachine mongodb --type merge -p '{"spec":{"runStrategy": "Halted"}}' -n nodejs-ex
 

And if you would rather not have to remember the kubectl patch command above diff --git a/2018/ignition-support.html b/2018/ignition-support.html index ed79e34cb8..041bcb7aa3 100644 --- a/2018/ignition-support.html +++ b/2018/ignition-support.html @@ -312,7 +312,7 @@

Step 1

metadata: name: myvm1 spec: - running: true + runStrategy: Always template: metadata: labels: diff --git a/2019/How-To-Import-VM-into-Kubevirt.html b/2019/How-To-Import-VM-into-Kubevirt.html index 1fb3f25494..95638b3299 100644 --- a/2019/How-To-Import-VM-into-Kubevirt.html +++ b/2019/How-To-Import-VM-into-Kubevirt.html @@ -445,7 +445,7 @@

DataVolume VM Behavior

kubevirt.io/vm: vm-alpine-datavolume name: vm-alpine-datavolume spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/2019/More-about-Kubevirt-metrics.html b/2019/More-about-Kubevirt-metrics.html index 2c6c0d8fd8..7866935594 100644 --- a/2019/More-about-Kubevirt-metrics.html +++ b/2019/More-about-Kubevirt-metrics.html @@ -303,7 +303,7 @@

New metrics

kubevirt.io/vm: vm-test-01 name: vm-test-01 spec: - running: false + runStrategy: Halted template: metadata: creationTimestamp: null diff --git a/2020/Common_templates.html b/2020/Common_templates.html index 403f880f20..1c8a7cb719 100644 --- a/2020/Common_templates.html +++ b/2020/Common_templates.html @@ -310,7 +310,7 @@
To start the VM from the created object

An alternative way to start the VM is with the oc patch command. Example:

-

$ oc patch virtualmachine rheltinyvm --type merge -p '{"spec":{"running":true}}'

+

$ oc patch virtualmachine rheltinyvm --type merge -p '{"spec":{"runStrategy":"Always"}}'

As soon as the VM starts, OpenShift creates a new type of object, a VirtualMachineInstance. Its name is similar to that of the VirtualMachine.

diff --git a/2020/KubeVirt-Architecture-Fundamentals.html b/2020/KubeVirt-Architecture-Fundamentals.html index 90643b9297..981d05481f 100644 --- a/2020/KubeVirt-Architecture-Fundamentals.html +++ b/2020/KubeVirt-Architecture-Fundamentals.html @@ -307,7 +307,7 @@

The Declarative KubeVirt Vi

The Kubernetes core APIs have this concept of layering objects on top of one another through the use of workload controllers. For example, the Kubernetes ReplicaSet is a workload controller layered on top of pods. The ReplicaSet controller ensures that there are always ‘x’ number of pod replicas running within the cluster. If a ReplicaSet object declares that 5 pod replicas should be running, but a node dies, bringing that total to 4, then the ReplicaSet workload controller spins up a 5th pod in order to meet the declared replica count. The workload controller is always reconciling toward the ReplicaSet object’s desired state.
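To make the layering concrete, here is a minimal ReplicaSet manifest declaring the five replicas used in the example above. It is an illustrative sketch only (the name, labels, and image are hypothetical, not taken from the original post):

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: example-rs            # hypothetical name
  spec:
    replicas: 5                 # desired state: five pod replicas
    selector:
      matchLabels:
        app: example
    template:
      metadata:
        labels:
          app: example          # must match the selector above
      spec:
        containers:
        - name: example
          image: nginx          # hypothetical workload image

If a node failure drops the live count to 4, the controller reconciles back to 5 by creating a replacement pod.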

-

Using this established Kubernetes pattern of layering objects on top of one another, we came up with our own virtualization specific API and corresponding workload controller called a “VirtualMachine” (big surprise there on the name, right?). Users declare a VirtualMachine object just like they would a pod by posting the VirtualMachine object’s manifest to the cluster. The big difference here that deviates from how pods are managed is that we allow VirtualMachine objects to be declared to exist in different states. For example, you can declare you want to “start” a virtual machine by setting “running: true” on the VirtualMachine object’s spec. Likewise you can declare you want to “stop” a virtual machine by setting “running: false” on the VirtualMachine object’s spec. Behind the scenes, setting the “running” field to true or false results in the workload controller creating or deleting a pod for the virtual machine to live in.

+

Using this established Kubernetes pattern of layering objects on top of one another, we came up with our own virtualization-specific API and corresponding workload controller called a “VirtualMachine” (big surprise there on the name, right?). Users declare a VirtualMachine object just like they would a pod, by posting the VirtualMachine object’s manifest to the cluster. The big difference from how pods are managed is that we allow VirtualMachine objects to be declared to exist in different states. For example, you can declare you want to “start” a virtual machine by setting “runStrategy: Always” on the VirtualMachine object’s spec. Likewise, you can declare you want to “stop” a virtual machine by setting “runStrategy: Halted” on the VirtualMachine object’s spec. Behind the scenes, setting the “runStrategy” field results in the workload controller creating or deleting a pod for the virtual machine to live in.
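As a minimal sketch of what those two declarations look like on a VirtualMachine spec (only the relevant field is shown):

  # Declare that the virtual machine should be started:
  spec:
    runStrategy: Always   # the controller creates a VirtualMachineInstance and its pod

  # Declare that the virtual machine should be stopped:
  spec:
    runStrategy: Halted   # the controller deletes the VirtualMachineInstance, powering the VM off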

In the end, we essentially created the concept of an immortal VirtualMachine by layering our own custom API on top of mortal pods. Our API and controller know how to resurrect a “stopped” VirtualMachine by constructing a pod with all the right network, storage volumes, CPU, and memory attached in order to accurately bring the VirtualMachine back to life with the exact same state it stopped with.

diff --git a/2020/KubeVirt-VM-Image-Usage-Patterns.html b/2020/KubeVirt-VM-Image-Usage-Patterns.html index 00438061a0..8efd09ef1f 100644 --- a/2020/KubeVirt-VM-Image-Usage-Patterns.html +++ b/2020/KubeVirt-VM-Image-Usage-Patterns.html @@ -620,7 +620,7 @@

Example: Launching kubevirt.io/vm: nginx name: nginx spec: - running: true + runStrategy: Always template: metadata: labels: diff --git a/2020/KubeVirt-installing_Microsoft_Windows_from_an_iso.html b/2020/KubeVirt-installing_Microsoft_Windows_from_an_iso.html index a3f0753c5f..fa804dd28d 100644 --- a/2020/KubeVirt-installing_Microsoft_Windows_from_an_iso.html +++ b/2020/KubeVirt-installing_Microsoft_Windows_from_an_iso.html @@ -298,7 +298,7 @@

Preparation

metadata: name: win2k12-iso spec: - running: false + runStrategy: Halted template: metadata: labels: @@ -427,7 +427,7 @@

Preparation

metadata: name: win2k12-iso spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/2020/Monitoring-KubeVirt-VMs-from-the-inside.html b/2020/Monitoring-KubeVirt-VMs-from-the-inside.html index a41ff71d1b..515ede34fe 100644 --- a/2020/Monitoring-KubeVirt-VMs-from-the-inside.html +++ b/2020/Monitoring-KubeVirt-VMs-from-the-inside.html @@ -385,7 +385,7 @@

Deploying a VirtualM metadata: name: monitorable-vm spec: - running: true + runStrategy: Always template: metadata: name: monitorable-vm diff --git a/2020/Multiple-Network-Attachments-with-bridge-CNI.html b/2020/Multiple-Network-Attachments-with-bridge-CNI.html index f44e17e624..b418aae046 100644 --- a/2020/Multiple-Network-Attachments-with-bridge-CNI.html +++ b/2020/Multiple-Network-Attachments-with-bridge-CNI.html @@ -765,7 +765,7 @@

metadata: name: vma spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -819,7 +819,7 @@

metadata: name: vmb spec: - running: true + runStrategy: Always template: spec: nodeSelector: diff --git a/2020/run_strategies.html b/2020/run_strategies.html index 8596dac05f..2fc2b6d1e6 100644 --- a/2020/run_strategies.html +++ b/2020/run_strategies.html @@ -284,6 +284,7 @@

Four RunStrategies currently exist
  • Always: If a VM is stopped for any reason, a new instance will be spawned.
  • RerunOnFailure: If a VM ends execution in an error state, a new instance will be spawned (a minimal manifest using this strategy is sketched after this list). This addresses the second concern listed above. If a user halts a VM manually, a new instance will not be spawned.
  • +
  • Once: The VM will run once and will not be restarted upon completion, regardless of whether it completes in the Failure or Success phase.
  • Manual: This is exactly what it sounds like. KubeVirt will not attempt to start or stop a VM. In order to change state, the user must invoke start/stop/restart from the API. Convenience functions exist in the virtctl command-line client as well.
  • Halted: The VM will be stopped if it’s running, and will remain off.
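For illustration, a minimal VirtualMachine manifest using the RerunOnFailure strategy might look like the following sketch (the VM name is a placeholder; the disk layout mirrors the testvm example used elsewhere on this site):

  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: example-vm              # placeholder name
  spec:
    runStrategy: RerunOnFailure   # respawn the instance only if it ends in an error state
    template:
      spec:
        domain:
          devices:
            disks:
            - name: containerdisk
              disk:
                bus: virtio
          resources:
            requests:
              memory: 1024M
        volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest

Switching the value to Manual would leave all lifecycle changes to explicit start/stop/restart calls, as described above.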
diff --git a/2021/Automated-Windows-Installation-With-Tekton-Pipelines.html b/2021/Automated-Windows-Installation-With-Tekton-Pipelines.html index a3cd52ea00..64d0aab38d 100644 --- a/2021/Automated-Windows-Installation-With-Tekton-Pipelines.html +++ b/2021/Automated-Windows-Installation-With-Tekton-Pipelines.html @@ -766,7 +766,7 @@

    Inspecting the output

    pvc: name: PVC_NAME namespace: PVC_NAMESPACE - running: false + runStrategy: Halted template: metadata: labels: diff --git a/2021/Running-Realtime-Workloads.html b/2021/Running-Realtime-Workloads.html index 9cdcb0b178..87e77f9751 100644 --- a/2021/Running-Realtime-Workloads.html +++ b/2021/Running-Realtime-Workloads.html @@ -340,7 +340,7 @@

    The Manifest

    name: fedora-realtime namespace: poc spec: - running: true + runStrategy: Always template: metadata: labels: diff --git a/2021/intel-vgpu-kubevirt.html b/2021/intel-vgpu-kubevirt.html index e42b9c25af..a1f342262a 100644 --- a/2021/intel-vgpu-kubevirt.html +++ b/2021/intel-vgpu-kubevirt.html @@ -634,7 +634,7 @@

    Install Windows

    metadata: name: win10vm1 spec: - running: false + runStrategy: Halted template: metadata: creationTimestamp: null diff --git a/2022/Virtual-Machines-with-MetalLB.html b/2022/Virtual-Machines-with-MetalLB.html index 448c97b174..c6a2d25fc6 100644 --- a/2022/Virtual-Machines-with-MetalLB.html +++ b/2022/Virtual-Machines-with-MetalLB.html @@ -382,7 +382,7 @@

    Spin up a Virtual Machine runni labels: metallb-service: nginx spec: - running: true + runStrategy: Always template: metadata: labels: diff --git a/2023/KubeVirt-on-autoscaling-nodes.html b/2023/KubeVirt-on-autoscaling-nodes.html index c7df08c6e1..4e16709915 100644 --- a/2023/KubeVirt-on-autoscaling-nodes.html +++ b/2023/KubeVirt-on-autoscaling-nodes.html @@ -588,7 +588,7 @@

    Deploy a VM to test

    metadata: name: testvm spec: - running: true + runStrategy: Always template: spec: domain: diff --git a/2023/OVN-kubernetes-secondary-networks-localnet.html b/2023/OVN-kubernetes-secondary-networks-localnet.html index 51a8513255..5ec8f035bd 100644 --- a/2023/OVN-kubernetes-secondary-networks-localnet.html +++ b/2023/OVN-kubernetes-secondary-networks-localnet.html @@ -412,7 +412,7 @@

    Spin up the VMs

    metadata: name: vm-server spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -460,7 +460,7 @@

    Spin up the VMs

    metadata: name: vm-client spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -650,7 +650,7 @@

    Spin up the VMs

    metadata: name: vm-red-1 spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -698,7 +698,7 @@

    Spin up the VMs

    metadata: name: vm-red-2 spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -746,7 +746,7 @@

    Spin up the VMs

    metadata: name: vm-blue-1 spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -794,7 +794,7 @@

    Spin up the VMs

    metadata: name: vm-blue-2 spec: - running: true + runStrategy: Always template: spec: nodeSelector: diff --git a/2023/OVN-kubernetes-secondary-networks-policies.html b/2023/OVN-kubernetes-secondary-networks-policies.html index 5354207c19..177c2bc58b 100644 --- a/2023/OVN-kubernetes-secondary-networks-policies.html +++ b/2023/OVN-kubernetes-secondary-networks-policies.html @@ -384,7 +384,7 @@

    Limiting ingress to a KubeVirt VM

    kubevirt.io/vm: vm1 name: vm1 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -438,7 +438,7 @@

    Limiting ingress to a KubeVirt VM

    kubevirt.io/vm: vm2 name: vm2 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -492,7 +492,7 @@

    Limiting ingress to a KubeVirt VM

    kubevirt.io/vm: vm3 name: vm3 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -546,7 +546,7 @@

    Limiting ingress to a KubeVirt VM

    kubevirt.io/vm: vm4 name: vm4 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -600,7 +600,7 @@

    Limiting ingress to a KubeVirt VM

    kubevirt.io/vm: vm5 name: vm5 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -654,7 +654,7 @@

    Limiting ingress to a KubeVirt VM

    kubevirt.io/vm: vm6 name: vm6 spec: - running: true + runStrategy: Always template: metadata: labels: diff --git a/2023/OVN-kubernetes-secondary-networks.html b/2023/OVN-kubernetes-secondary-networks.html index ec4d478484..3ea5d6bf82 100644 --- a/2023/OVN-kubernetes-secondary-networks.html +++ b/2023/OVN-kubernetes-secondary-networks.html @@ -364,7 +364,7 @@

    Spin up the VMs

    metadata: name: vm-server spec: - running: true + runStrategy: Always template: spec: domain: @@ -414,7 +414,7 @@

    Spin up the VMs

    metadata: name: vm-client spec: - running: true + runStrategy: Always template: spec: domain: diff --git a/404.html b/404.html index b4b822c5a0..9523720356 100644 --- a/404.html +++ b/404.html @@ -53,7 +53,7 @@ - + diff --git a/application-aware-quota/index.html b/application-aware-quota/index.html index 5cd777f221..d0d41f9b95 100644 --- a/application-aware-quota/index.html +++ b/application-aware-quota/index.html @@ -53,7 +53,7 @@ - + diff --git a/applications-aware-quota/index.html b/applications-aware-quota/index.html index 16311fd9eb..8751d45830 100644 --- a/applications-aware-quota/index.html +++ b/applications-aware-quota/index.html @@ -53,7 +53,7 @@ - + diff --git a/assets/2020-02-14-KubeVirt-installing_Microsoft_Windows_from_an_iso/win2k12.yml b/assets/2020-02-14-KubeVirt-installing_Microsoft_Windows_from_an_iso/win2k12.yml index c06826e8a3..0c720ca402 100644 --- a/assets/2020-02-14-KubeVirt-installing_Microsoft_Windows_from_an_iso/win2k12.yml +++ b/assets/2020-02-14-KubeVirt-installing_Microsoft_Windows_from_an_iso/win2k12.yml @@ -15,7 +15,7 @@ kind: VirtualMachine metadata: name: win2k12-iso spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/assets/2020-06-22-win_workload_in_k8s/vm_testvm.yaml b/assets/2020-06-22-win_workload_in_k8s/vm_testvm.yaml index 83a12a0ffd..3ba41251dc 100644 --- a/assets/2020-06-22-win_workload_in_k8s/vm_testvm.yaml +++ b/assets/2020-06-22-win_workload_in_k8s/vm_testvm.yaml @@ -8,7 +8,7 @@ metadata: special: key name: testvm spec: - running: true + runStrategy: Always template: metadata: creationTimestamp: null diff --git a/blogs/community.html b/blogs/community.html index 746a8c3b0b..659b8c943a 100644 --- a/blogs/community.html +++ b/blogs/community.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/date.html b/blogs/date.html index 1164b12a5d..b71d988912 100644 --- a/blogs/date.html +++ b/blogs/date.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/index.html b/blogs/index.html index 916fb046bb..8c284f44fd 100644 --- a/blogs/index.html +++ b/blogs/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/news.html b/blogs/news.html index 847c08d72f..bdd2077a9c 100644 --- a/blogs/news.html +++ b/blogs/news.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page10/index.html b/blogs/page10/index.html index 1bb419c13b..ba8948e03e 100644 --- a/blogs/page10/index.html +++ b/blogs/page10/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page11/index.html b/blogs/page11/index.html index 334291f9d4..4f8522cc1c 100644 --- a/blogs/page11/index.html +++ b/blogs/page11/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page12/index.html b/blogs/page12/index.html index d59ee6d78a..4051039686 100644 --- a/blogs/page12/index.html +++ b/blogs/page12/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page13/index.html b/blogs/page13/index.html index 908b29bf00..ef45545f2f 100644 --- a/blogs/page13/index.html +++ b/blogs/page13/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page14/index.html b/blogs/page14/index.html index 82103d1fd8..e8067d3cd2 100644 --- a/blogs/page14/index.html +++ b/blogs/page14/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page15/index.html b/blogs/page15/index.html index 8412f7ae46..743433ff19 100644 --- a/blogs/page15/index.html +++ b/blogs/page15/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page16/index.html b/blogs/page16/index.html index 9b03715866..5c2ebe0f41 100644 --- a/blogs/page16/index.html +++ b/blogs/page16/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page17/index.html 
b/blogs/page17/index.html index dc85d3a165..c91591bae9 100644 --- a/blogs/page17/index.html +++ b/blogs/page17/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page18/index.html b/blogs/page18/index.html index d91e4d2f10..895811230a 100644 --- a/blogs/page18/index.html +++ b/blogs/page18/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page19/index.html b/blogs/page19/index.html index 4a58e1de11..e83c5b3994 100644 --- a/blogs/page19/index.html +++ b/blogs/page19/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page2/index.html b/blogs/page2/index.html index f7b65f2d58..72cdd35feb 100644 --- a/blogs/page2/index.html +++ b/blogs/page2/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page20/index.html b/blogs/page20/index.html index 967d1cb7f4..fa6ad1017a 100644 --- a/blogs/page20/index.html +++ b/blogs/page20/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page21/index.html b/blogs/page21/index.html index 7b430b40c8..a6ae376b00 100644 --- a/blogs/page21/index.html +++ b/blogs/page21/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page22/index.html b/blogs/page22/index.html index dd0da3f204..9ac398166f 100644 --- a/blogs/page22/index.html +++ b/blogs/page22/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page23/index.html b/blogs/page23/index.html index 4f658b81a0..babcc2e887 100644 --- a/blogs/page23/index.html +++ b/blogs/page23/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page24/index.html b/blogs/page24/index.html index e7b4d0985a..dbd60be54f 100644 --- a/blogs/page24/index.html +++ b/blogs/page24/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page25/index.html b/blogs/page25/index.html index 56030c7c93..841fade488 100644 --- a/blogs/page25/index.html +++ b/blogs/page25/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page26/index.html b/blogs/page26/index.html index 8c36f791cb..f1127bd18f 100644 --- a/blogs/page26/index.html +++ b/blogs/page26/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page27/index.html b/blogs/page27/index.html index 4c2c677516..ebd9ce80d3 100644 --- a/blogs/page27/index.html +++ b/blogs/page27/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page28/index.html b/blogs/page28/index.html index 0d43c9f4a9..bb645b6247 100644 --- a/blogs/page28/index.html +++ b/blogs/page28/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page29/index.html b/blogs/page29/index.html index ddf830819b..11317adff5 100644 --- a/blogs/page29/index.html +++ b/blogs/page29/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page3/index.html b/blogs/page3/index.html index 354cec70c9..3111f93d32 100644 --- a/blogs/page3/index.html +++ b/blogs/page3/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page30/index.html b/blogs/page30/index.html index f788670196..79e25b964b 100644 --- a/blogs/page30/index.html +++ b/blogs/page30/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page31/index.html b/blogs/page31/index.html index 6c8c0e35a6..7c89dd8a83 100644 --- a/blogs/page31/index.html +++ b/blogs/page31/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page32/index.html b/blogs/page32/index.html index 52af3cf8c5..844a55b792 100644 --- a/blogs/page32/index.html +++ b/blogs/page32/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page33/index.html b/blogs/page33/index.html index 3d442430ab..63bc807a04 100644 --- a/blogs/page33/index.html +++ b/blogs/page33/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page34/index.html b/blogs/page34/index.html index 894f5cf79c..07443066c6 100644 --- a/blogs/page34/index.html +++ b/blogs/page34/index.html 
@@ -53,7 +53,7 @@ - + diff --git a/blogs/page35/index.html b/blogs/page35/index.html index 6c847ce364..48b8f9115c 100644 --- a/blogs/page35/index.html +++ b/blogs/page35/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page36/index.html b/blogs/page36/index.html index 30e7c6defe..b198bb65ee 100644 --- a/blogs/page36/index.html +++ b/blogs/page36/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page37/index.html b/blogs/page37/index.html index d2ba1ab5ac..63b3b9645f 100644 --- a/blogs/page37/index.html +++ b/blogs/page37/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page4/index.html b/blogs/page4/index.html index 9f7ee721df..23246ac82a 100644 --- a/blogs/page4/index.html +++ b/blogs/page4/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page5/index.html b/blogs/page5/index.html index d50164a4e0..ed210dd7f5 100644 --- a/blogs/page5/index.html +++ b/blogs/page5/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page6/index.html b/blogs/page6/index.html index 9b838a0955..31d69245e9 100644 --- a/blogs/page6/index.html +++ b/blogs/page6/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page7/index.html b/blogs/page7/index.html index 4e0f2674d6..3a66ef44ac 100644 --- a/blogs/page7/index.html +++ b/blogs/page7/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page8/index.html b/blogs/page8/index.html index 1c4805228d..418244e455 100644 --- a/blogs/page8/index.html +++ b/blogs/page8/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/page9/index.html b/blogs/page9/index.html index 0b3b2e8fcb..b0df78e6a7 100644 --- a/blogs/page9/index.html +++ b/blogs/page9/index.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/releases.html b/blogs/releases.html index 4c224267fb..f3b3faf4a1 100644 --- a/blogs/releases.html +++ b/blogs/releases.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/uncategorized.html b/blogs/uncategorized.html index 66cd16a504..82f4ebc382 100644 --- a/blogs/uncategorized.html +++ b/blogs/uncategorized.html @@ -53,7 +53,7 @@ - + diff --git a/blogs/updates.html b/blogs/updates.html index 422140c19f..240f5e48f2 100644 --- a/blogs/updates.html +++ b/blogs/updates.html @@ -53,7 +53,7 @@ - + diff --git a/category/community.html b/category/community.html index 9f71d30a9c..5997610374 100644 --- a/category/community.html +++ b/category/community.html @@ -53,7 +53,7 @@ - + diff --git a/category/news.html b/category/news.html index d408d8abdf..e8429106f6 100644 --- a/category/news.html +++ b/category/news.html @@ -53,7 +53,7 @@ - + diff --git a/category/releases.html b/category/releases.html index 50d80165f2..e302d62ea5 100644 --- a/category/releases.html +++ b/category/releases.html @@ -53,7 +53,7 @@ - + diff --git a/category/uncategorized.html b/category/uncategorized.html index 781727c7e5..3318bad543 100644 --- a/category/uncategorized.html +++ b/category/uncategorized.html @@ -53,7 +53,7 @@ - + diff --git a/category/weekly-updates.html b/category/weekly-updates.html index 6b85b2dc38..7ade030a83 100644 --- a/category/weekly-updates.html +++ b/category/weekly-updates.html @@ -53,7 +53,7 @@ - + diff --git a/client-go/index.html b/client-go/index.html index 677ff7dd41..e3351f5780 100644 --- a/client-go/index.html +++ b/client-go/index.html @@ -53,7 +53,7 @@ - + diff --git a/cloud-provider-kubevirt/index.html b/cloud-provider-kubevirt/index.html index 087e9323ab..4ccfc223a3 100644 --- a/cloud-provider-kubevirt/index.html +++ b/cloud-provider-kubevirt/index.html @@ -53,7 +53,7 @@ - + diff --git a/cluster-api-provider-external/index.html b/cluster-api-provider-external/index.html index 
49c6187931..77820d1ee0 100644 --- a/cluster-api-provider-external/index.html +++ b/cluster-api-provider-external/index.html @@ -53,7 +53,7 @@ - + diff --git a/community/index.html b/community/index.html index c483cca5e6..7b85d81d30 100644 --- a/community/index.html +++ b/community/index.html @@ -53,7 +53,7 @@ - + diff --git a/containerized-data-importer/index.html b/containerized-data-importer/index.html index 152657e199..6b4a91b472 100644 --- a/containerized-data-importer/index.html +++ b/containerized-data-importer/index.html @@ -53,7 +53,7 @@ - + diff --git a/controller-lifecycle-operator-sdk/index.html b/controller-lifecycle-operator-sdk/index.html index 7e5aa15479..88ed4fc472 100644 --- a/controller-lifecycle-operator-sdk/index.html +++ b/controller-lifecycle-operator-sdk/index.html @@ -53,7 +53,7 @@ - + diff --git a/cpu-nfd-plugin/index.html b/cpu-nfd-plugin/index.html index 0580e133a6..14081974ab 100644 --- a/cpu-nfd-plugin/index.html +++ b/cpu-nfd-plugin/index.html @@ -53,7 +53,7 @@ - + diff --git a/docs/index.html b/docs/index.html index f98ca23b5d..cdc63cf59c 100644 --- a/docs/index.html +++ b/docs/index.html @@ -53,7 +53,7 @@ - + diff --git a/feed.xml b/feed.xml index 4f7f482da8..a82dc9c241 100644 --- a/feed.xml +++ b/feed.xml @@ -1,4 +1,4 @@ -Jekyll2024-10-15T10:12:51+00:00https://kubevirt.io//feed.xmlKubeVirt.ioVirtual Machine Management on KubernetesKubeVirt v1.3.02024-07-17T00:00:00+00:002024-07-17T00:00:00+00:00https://kubevirt.io//2024/changelog-v1.3.0v1.3.0 +Jekyll2024-10-30T13:11:13+00:00https://kubevirt.io//feed.xmlKubeVirt.ioVirtual Machine Management on KubernetesKubeVirt v1.3.02024-07-17T00:00:00+00:002024-07-17T00:00:00+00:00https://kubevirt.io//2024/changelog-v1.3.0v1.3.0

    Released on: Wed Jul 17 15:09:44 2024 +0000

    @@ -693,7 +693,7 @@ kind: VirtualMachine metadata: name: testvm spec: - running: true + runStrategy: Always template: spec: domain: @@ -1270,7 +1270,7 @@ a subnet):

    kubevirt.io/vm: vm1 name: vm1 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1324,7 +1324,7 @@ a subnet):

    kubevirt.io/vm: vm2 name: vm2 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1378,7 +1378,7 @@ a subnet):

    kubevirt.io/vm: vm3 name: vm3 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1432,7 +1432,7 @@ a subnet):

    kubevirt.io/vm: vm4 name: vm4 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1486,7 +1486,7 @@ a subnet):

    kubevirt.io/vm: vm5 name: vm5 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1540,7 +1540,7 @@ a subnet):

    kubevirt.io/vm: vm6 name: vm6 spec: - running: true + runStrategy: Always template: metadata: labels: diff --git a/feed/community.xml b/feed/community.xml index 3711649dd2..de0e7dfc29 100644 --- a/feed/community.xml +++ b/feed/community.xml @@ -1 +1 @@ -Jekyll2024-10-15T10:12:51+00:00https://kubevirt.io//feed/community.xmlKubeVirt.io | CommunityVirtual Machine Management on Kubernetes \ No newline at end of file +Jekyll2024-10-30T13:11:13+00:00https://kubevirt.io//feed/community.xmlKubeVirt.io | CommunityVirtual Machine Management on Kubernetes \ No newline at end of file diff --git a/feed/news.xml b/feed/news.xml index 07ab18b155..011f09475e 100644 --- a/feed/news.xml +++ b/feed/news.xml @@ -1,4 +1,4 @@ -Jekyll2024-10-15T10:12:51+00:00https://kubevirt.io//feed/news.xmlKubeVirt.io | NewsVirtual Machine Management on KubernetesKubeVirt Summit 2024 CfP is open!2024-03-19T00:00:00+00:002024-03-19T00:00:00+00:00https://kubevirt.io//2024/KubeVirt-Summit-2024-CfPWe are very pleased to announce the details for this year’s KubeVirt Summit!!

    +Jekyll2024-10-30T13:11:13+00:00https://kubevirt.io//feed/news.xmlKubeVirt.io | NewsVirtual Machine Management on KubernetesKubeVirt Summit 2024 CfP is open!2024-03-19T00:00:00+00:002024-03-19T00:00:00+00:00https://kubevirt.io//2024/KubeVirt-Summit-2024-CfPWe are very pleased to announce the details for this year’s KubeVirt Summit!!

    What is KubeVirt Summit?

    @@ -423,7 +423,7 @@ kind: VirtualMachine metadata: name: testvm spec: - running: true + runStrategy: Always template: spec: domain: @@ -1000,7 +1000,7 @@ a subnet):

    kubevirt.io/vm: vm1 name: vm1 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1054,7 +1054,7 @@ a subnet):

    kubevirt.io/vm: vm2 name: vm2 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1108,7 +1108,7 @@ a subnet):

    kubevirt.io/vm: vm3 name: vm3 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1162,7 +1162,7 @@ a subnet):

    kubevirt.io/vm: vm4 name: vm4 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1216,7 +1216,7 @@ a subnet):

    kubevirt.io/vm: vm5 name: vm5 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1270,7 +1270,7 @@ a subnet):

    kubevirt.io/vm: vm6 name: vm6 spec: - running: true + runStrategy: Always template: metadata: labels: @@ -1667,7 +1667,7 @@ preventing OVN-Kubernetes from assigning that IP address to the workloads.

    metadata: name: vm-server spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -1715,7 +1715,7 @@ preventing OVN-Kubernetes from assigning that IP address to the workloads.

    metadata: name: vm-client spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -1905,7 +1905,7 @@ networks share the same OVS bridge, each on a different VLAN).

    metadata: name: vm-red-1 spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -1953,7 +1953,7 @@ networks share the same OVS bridge, each on a different VLAN).

    metadata: name: vm-red-2 spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -2001,7 +2001,7 @@ networks share the same OVS bridge, each on a different VLAN).

    metadata: name: vm-blue-1 spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -2049,7 +2049,7 @@ networks share the same OVS bridge, each on a different VLAN).

    metadata: name: vm-blue-2 spec: - running: true + runStrategy: Always template: spec: nodeSelector: @@ -2309,7 +2309,7 @@ kind: VirtualMachine metadata: name: vm-server spec: - running: true + runStrategy: Always template: spec: domain: @@ -2359,7 +2359,7 @@ kind: VirtualMachine metadata: name: vm-client spec: - running: true + runStrategy: Always template: spec: domain: diff --git a/feed/releases.xml b/feed/releases.xml index 384b22fa24..53b7732d54 100644 --- a/feed/releases.xml +++ b/feed/releases.xml @@ -1,4 +1,4 @@ -Jekyll2024-10-15T10:12:51+00:00https://kubevirt.io//feed/releases.xmlKubeVirt.io | ReleasesVirtual Machine Management on KubernetesKubeVirt v1.3.02024-07-17T00:00:00+00:002024-07-17T00:00:00+00:00https://kubevirt.io//2024/changelog-v1.3.0v1.3.0 +Jekyll2024-10-30T13:11:13+00:00https://kubevirt.io//feed/releases.xmlKubeVirt.io | ReleasesVirtual Machine Management on KubernetesKubeVirt v1.3.02024-07-17T00:00:00+00:002024-07-17T00:00:00+00:00https://kubevirt.io//2024/changelog-v1.3.0v1.3.0

    Released on: Wed Jul 17 15:09:44 2024 +0000

    diff --git a/feed/uncategorized.xml b/feed/uncategorized.xml index cfeecbf215..922ab3bd41 100644 --- a/feed/uncategorized.xml +++ b/feed/uncategorized.xml @@ -1,4 +1,4 @@ -Jekyll2024-10-15T10:12:51+00:00https://kubevirt.io//feed/uncategorized.xmlKubeVirt.io | UncategorizedVirtual Machine Management on KubernetesMonitoring KubeVirt VMs from the inside2020-12-10T00:00:00+00:002020-12-10T00:00:00+00:00https://kubevirt.io//2020/Monitoring-KubeVirt-VMs-from-the-insideMonitoring KubeVirt VMs from the inside +Jekyll2024-10-30T13:11:13+00:00https://kubevirt.io//feed/uncategorized.xmlKubeVirt.io | UncategorizedVirtual Machine Management on KubernetesMonitoring KubeVirt VMs from the inside2020-12-10T00:00:00+00:002020-12-10T00:00:00+00:00https://kubevirt.io//2020/Monitoring-KubeVirt-VMs-from-the-insideMonitoring KubeVirt VMs from the inside

    This blog post will guide you through monitoring KubeVirt Linux-based VirtualMachines with the Prometheus node-exporter. Since node_exporter will run inside the VM and expose metrics at an HTTP endpoint, you can use this same guide for custom applications that expose metrics in the Prometheus format.

    @@ -120,7 +120,7 @@ kubectl rollout status -n cdi deployment cdi-deployment metadata: name: monitorable-vm spec: - running: true + runStrategy: Always template: metadata: name: monitorable-vm @@ -2511,19 +2511,19 @@ metadata:

    What is Running in OfflineVirtualMachine?

    -

    .spec.running controls whether the associated VirtualMachine object is created. In other words this changes the power status of the virtual machine.

    +

    .spec.runStrategy controls whether and when the associated VirtualMachineInstance object is created. In other words, this controls the power status of the virtual machine.

    -
      running: true
    +
      runStrategy: Always
     
    -

    This will create a VirtualMachine object which will instantiate and power on a virtual machine.

    +

    This will create a VirtualMachineInstance object which will instantiate and power on a virtual machine.

    -
    kubectl patch offlinevirtualmachine mongodb --type merge -p '{"spec":{"running":true }}' -n nodejs-ex
    +
    kubectl patch offlinevirtualmachine mongodb --type merge -p '{"spec":{"runStrategy": "Always"}}' -n nodejs-ex
     
    -

    This will delete the VirtualMachine object which will power off the virtual machine.

    +

    This will delete the VirtualMachineInstance object which will power off the virtual machine.

    -
    kubectl patch offlinevirtualmachine mongodb --type merge -p '{"spec":{"running":false }}' -n nodejs-ex
    +
    kubectl patch offlinevirtualmachine mongodb --type merge -p '{"spec":{"runStrategy": "Halted"}}' -n nodejs-ex
     

    And if you would rather not have to remember the kubectl patch command above @@ -3036,7 +3036,7 @@ Note the PVC containing the cirros image must be listed as the first disk under kubevirt.io/ovm: cirros name: cirros spec: - running: false + runStrategy: Halted template: metadata: creationTimestamp: null diff --git a/feed/updates.xml b/feed/updates.xml index c8f3fc27d4..6b8342c18a 100644 --- a/feed/updates.xml +++ b/feed/updates.xml @@ -1,4 +1,4 @@ -Jekyll2024-10-15T10:12:51+00:00https://kubevirt.io//feed/updates.xmlKubeVirt.io | UpdatesVirtual Machine Management on KubernetesThis Week In Kube Virt 232018-04-27T00:00:00+00:002018-04-27T00:00:00+00:00https://kubevirt.io//2018/This-Week-in-Kube-Virt-23This is a close-to weekly update from the KubeVirt team.

    +Jekyll2024-10-30T13:11:13+00:00https://kubevirt.io//feed/updates.xmlKubeVirt.io | UpdatesVirtual Machine Management on KubernetesThis Week In Kube Virt 232018-04-27T00:00:00+00:002018-04-27T00:00:00+00:00https://kubevirt.io//2018/This-Week-in-Kube-Virt-23This is a close-to weekly update from the KubeVirt team.

    In general there is now more work happening outside of the core kubevirt repository.

    diff --git a/gallery/index.html b/gallery/index.html index 2911a8bc7a..97504129fb 100644 --- a/gallery/index.html +++ b/gallery/index.html @@ -53,7 +53,7 @@ - + diff --git a/hostpath-provisioner-operator/index.html b/hostpath-provisioner-operator/index.html index d83bf57c4a..25576f9cf9 100644 --- a/hostpath-provisioner-operator/index.html +++ b/hostpath-provisioner-operator/index.html @@ -53,7 +53,7 @@ - + diff --git a/hostpath-provisioner/index.html b/hostpath-provisioner/index.html index abe2467089..f0c4774250 100644 --- a/hostpath-provisioner/index.html +++ b/hostpath-provisioner/index.html @@ -53,7 +53,7 @@ - + diff --git a/index.html b/index.html index 1526abb877..6e85491f22 100644 --- a/index.html +++ b/index.html @@ -53,7 +53,7 @@ - + diff --git a/kubevirt/index.html b/kubevirt/index.html index f780c54120..8093ecc483 100644 --- a/kubevirt/index.html +++ b/kubevirt/index.html @@ -53,7 +53,7 @@ - + diff --git a/labs/index.html b/labs/index.html index 823b7b9264..f94a0b53a5 100644 --- a/labs/index.html +++ b/labs/index.html @@ -53,7 +53,7 @@ - + diff --git a/labs/kubernetes/lab1.html b/labs/kubernetes/lab1.html index 3b47a58558..2951f82f3c 100644 --- a/labs/kubernetes/lab1.html +++ b/labs/kubernetes/lab1.html @@ -53,7 +53,7 @@ - + @@ -497,11 +497,11 @@

    Manage Virtual Machines (optional):
    # Start the virtual machine:
     kubectl patch virtualmachine testvm --type merge -p \
    -    '{"spec":{"running":true}}'
    +    '{"spec":{"runStrategy": "Always"}}'
     
     # Stop the virtual machine:
     kubectl patch virtualmachine testvm --type merge -p \
    -    '{"spec":{"running":false}}'
    +    '{"spec":{"runStrategy": "Halted"}}'
     

    Now that the Virtual Machine has been started, check the status (kubectl get vms). Note the Running status.

    diff --git a/labs/kubernetes/lab2.html b/labs/kubernetes/lab2.html index 3dfaa1cddc..892be7c3d6 100644 --- a/labs/kubernetes/lab2.html +++ b/labs/kubernetes/lab2.html @@ -53,7 +53,7 @@ - + diff --git a/labs/kubernetes/lab3.html b/labs/kubernetes/lab3.html index 0c4624222f..df0ba0ce80 100644 --- a/labs/kubernetes/lab3.html +++ b/labs/kubernetes/lab3.html @@ -53,7 +53,7 @@ - + diff --git a/labs/kubernetes/migration.html b/labs/kubernetes/migration.html index 3eee484c89..e7077c6491 100644 --- a/labs/kubernetes/migration.html +++ b/labs/kubernetes/migration.html @@ -53,7 +53,7 @@ - + diff --git a/labs/manifests/vm.yaml b/labs/manifests/vm.yaml index 66c23f2cd1..e3da3fb20e 100644 --- a/labs/manifests/vm.yaml +++ b/labs/manifests/vm.yaml @@ -3,7 +3,7 @@ kind: VirtualMachine metadata: name: testvm spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/labs/manifests/vm1_pvc.yml b/labs/manifests/vm1_pvc.yml index 64c283307c..f37ecd37e9 100644 --- a/labs/manifests/vm1_pvc.yml +++ b/labs/manifests/vm1_pvc.yml @@ -7,7 +7,7 @@ metadata: kubevirt.io/os: linux name: vm1 spec: - running: true + runStrategy: Always template: metadata: creationTimestamp: null diff --git a/machine-remediation/index.html b/machine-remediation/index.html index 8dce16e86d..88df2b1c37 100644 --- a/machine-remediation/index.html +++ b/machine-remediation/index.html @@ -53,7 +53,7 @@ - + diff --git a/managed-tenant-quota/index.html b/managed-tenant-quota/index.html index 7ccae37202..2433e877ba 100644 --- a/managed-tenant-quota/index.html +++ b/managed-tenant-quota/index.html @@ -53,7 +53,7 @@ - + diff --git a/node-maintenance-operator/index.html b/node-maintenance-operator/index.html index 1b2a84966e..a1e8d0f64c 100644 --- a/node-maintenance-operator/index.html +++ b/node-maintenance-operator/index.html @@ -53,7 +53,7 @@ - + diff --git a/privacy/index.html b/privacy/index.html index cf3e37ed0d..916b60c516 100644 --- a/privacy/index.html +++ b/privacy/index.html @@ -53,7 +53,7 @@ - + diff --git a/qe-tools/index.html b/qe-tools/index.html index 0f65d82b95..09de53cadf 100644 --- a/qe-tools/index.html +++ b/qe-tools/index.html @@ -53,7 +53,7 @@ - + diff --git a/quickstart_cloud/index.html b/quickstart_cloud/index.html index 4d06e7bf9d..25f13205ff 100644 --- a/quickstart_cloud/index.html +++ b/quickstart_cloud/index.html @@ -53,7 +53,7 @@ - + diff --git a/quickstart_kind/index.html b/quickstart_kind/index.html index 69b8699e7c..adf8410367 100644 --- a/quickstart_kind/index.html +++ b/quickstart_kind/index.html @@ -53,7 +53,7 @@ - + diff --git a/quickstart_minikube/index.html b/quickstart_minikube/index.html index 2d952efe15..dc594b11a3 100644 --- a/quickstart_minikube/index.html +++ b/quickstart_minikube/index.html @@ -53,7 +53,7 @@ - + diff --git a/search.html b/search.html index 34fce5c56e..83c0bb1f51 100644 --- a/search.html +++ b/search.html @@ -53,7 +53,7 @@ - + @@ -349,7 +349,7 @@

    "title": "Running KubeVirt with Cluster Autoscaler", "author" : "Mark Maglana, Jonathan Kinred, Paul Myjavec", "tags" : "Kubevirt, kubernetes, virtual machine, VM, Cluster Autoscaler, AWS, EKS", - "body": "Introduction: For this article, we’ll learn about the process of setting upKubeVirt with ClusterAutoscaleron EKS. In addition, we’ll be using bare metal nodes to host KubeVirt VMs. Required Base Knowledge: This article will talk about how to make various software systems work togetherbut introducing each one in detail is outside of its scope. Thus, you must already: Know how to administer a Kubernetes cluster; Be familiar with AWS, specifically IAM and EKS; and Have some experience with KubeVirt. Companion Code: All the code used in this article may also be found atgithub. com/relaxdiego/kubevirt-cas-baremetal. Set Up the Cluster: Shared environment variables: First let’s set some environment variables: # The name of the EKS cluster we're going to createexport RD_CLUSTER_NAME=my-cluster# The region where we will create the clusterexport RD_REGION=us-west-2# Kubernetes version to useexport RD_K8S_VERSION=1. 27# The name of the keypair that we're going to inject into the nodes. You# must create this ahead of time in the correct region. export RD_EC2_KEYPAIR_NAME=eks-my-clusterPrepare the cluster. yaml file: Using eksctl, prepare an EKS cluster config: eksctl create cluster \ --dry-run \ --name=${RD_CLUSTER_NAME} \ --nodegroup-name ng-infra \ --node-type m5. xlarge \ --nodes 2 \ --nodes-min 2 \ --nodes-max 2 \ --node-labels workload=infra \ --region=${RD_REGION} \ --ssh-access \ --ssh-public-key ${RD_EC2_KEYPAIR_NAME} \ --version ${RD_K8S_VERSION} \ --vpc-nat-mode HighlyAvailable \ --with-oidc \> cluster. yaml--dry-run means the command will not actually create the cluster but willinstead output a config to stdout which we then write to cluster. yaml. Open the file and look at what it has produced. For more info on the schema used by cluster. yaml, see the Config fileschema page from eksctl. io This cluster will start out with a node group that we will use to host our“infra” services. This is why we are using the cheaper m5. xlarge rather thana baremetal instance type. However, we also need to ensure that none of our VMswill ever be scheduled in these nodes. Thus we need to taint them. In thegenerated cluster. yaml file, append the following taint to the only nodegroup in the managedNodeGroups list: managedNodeGroups:- amiFamily: AmazonLinux2 . . . taints: - key: CriticalAddonsOnly effect: NoScheduleCreate the cluster: We can now create the cluster: eksctl create cluster --config-file cluster. yamlExample output: 2023-08-20 07:59:14 [ℹ] eksctl version . . . 2023-08-20 07:59:14 [ℹ] using region us-west-2 . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2a . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2b . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2c . . . . . . 2023-08-20 08:14:06 [ℹ] kubectl command should work with . . . 2023-08-20 08:14:06 [✔] EKS cluster my-cluster in us-west-2 is readyOnce the command is done, you should be able to query the the kube API. Forexample: kubectl get nodesExample output: NAME STATUS ROLES AGE VERSIONip-XXX. compute. internal Ready <none> 32m v1. 27. 4-eks-2d98532ip-YYY. compute. internal Ready <none> 32m v1. 27. 
4-eks-2d98532Create the Node Groups: As per this section of the Cluster Autoscalerdocs: If you’re using Persistent Volumes, your deployment needs to run in the sameAZ as where the EBS volume is, otherwise the pod scheduling could fail if itis scheduled in a different AZ and cannot find the EBS volume. To overcomethis, either use a single AZ ASG for this use case, or an ASG-per-AZ whileenabling --balance-similar-node-groups. Based on the above, we will create a node group for each of the availabilityzones (AZs) that was declared in cluster. yaml so that the Cluster Autoscaler willalways bring up a node in the AZ where a VM’s EBS-backed PV is located. To do that, we will first prepare a template that we can then feed toenvsubst. Save the following in node-group. yaml. template: ---# See: Config File Schema <https://eksctl. io/usage/schema/>apiVersion: eksctl. io/v1alpha5kind: ClusterConfigmetadata: name: ${RD_CLUSTER_NAME} region: ${RD_REGION}managedNodeGroups: - name: ng-${EKS_AZ}-c5-metal amiFamily: AmazonLinux2 instanceType: c5. metal availabilityZones: - ${EKS_AZ} desiredCapacity: 1 maxSize: 3 minSize: 0 labels: alpha. eksctl. io/cluster-name: my-cluster alpha. eksctl. io/nodegroup-name: ng-${EKS_AZ}-c5-metal workload: vm privateNetworking: false ssh: allow: true publicKeyPath: ${RD_EC2_KEYPAIR_NAME} volumeSize: 500 volumeIOPS: 10000 volumeThroughput: 750 volumeType: gp3 propagateASGTags: true tags: alpha. eksctl. io/nodegroup-name: ng-${EKS_AZ}-c5-metal alpha. eksctl. io/nodegroup-type: managed k8s. io/cluster-autoscaler/my-cluster: owned k8s. io/cluster-autoscaler/enabled: true # The following tags help CAS determine that this node group is able # to satisfy the label and resource requirements of the KubeVirt VMs. # See: https://github. com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README. md#auto-discovery-setup k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/kvm: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/tun: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/vhost-net: 1 k8s. io/cluster-autoscaler/node-template/resources/ephemeral-storage: 50M k8s. io/cluster-autoscaler/node-template/label/kubevirt. io/schedulable: true The last few tags bears additional emphasis. They are required because when avirtual machine is created, it will have the following requirements: requests: devices. kubevirt. io/kvm: 1 devices. kubevirt. io/tun: 1 devices. kubevirt. io/vhost-net: 1 ephemeral-storage: 50MnodeSelectors: kubevirt. io/schedulable=trueHowever, at least when scaling from zero for the first time, CAS will have noknowledge of this information unless the correct AWS tags are added to the nodegroup. This is why we have the following added to the managed node group’stags: k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/kvm: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/tun: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/vhost-net: 1 k8s. io/cluster-autoscaler/node-template/resources/ephemeral-storage: 50Mk8s. io/cluster-autoscaler/node-template/label/kubevirt. io/schedulable: true For more information on these tags, see Auto-DiscoverySetup. Create the VM Node Groups: We can now create the node group: yq . availabilityZones[] cluster. yaml -r | \ xargs -I{} bash -c export EKS_AZ={}; envsubst < node-group. yaml. 
template | \ eksctl create nodegroup --config-file - Deploy KubeVirt: The following was adapted from KubeVirt quickstart with cloudproviders. Deploy the KubeVirt operator: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/v1. 0. 0/kubevirt-operator. yamlSo that the operator will know how to deploy KubeVirt, let’s add the KubeVirtresource: cat <<EOF | kubectl apply -f -apiVersion: kubevirt. io/v1kind: KubeVirtmetadata: name: kubevirt namespace: kubevirtspec: certificateRotateStrategy: {} configuration: developerConfiguration: featureGates: [] customizeComponents: {} imagePullPolicy: IfNotPresent workloadUpdateStrategy: {} infra: nodePlacement: nodeSelector: workload: infra tolerations: - key: CriticalAddonsOnly operator: ExistsEOF Notice how we are specifically configuring KubeVirt itself to tolerate theCriticalAddonsOnly taint. This is so that the KubeVirt services themselvescan be scheduled in the infra nodes instead of the bare metal nodes which wewant to scale down to zero when there are no VMs. Wait until KubeVirt is in a Deployed state: kubectl get -n kubevirt -o=jsonpath= {. status. phase} \ kubevirt. kubevirt. io/kubevirtExample output: DeployedDouble check that all KubeVirt components are healthy: kubectl get pods -n kubevirtExample output: NAME READY STATUS RESTARTS AGEpod/virt-api-674467958c-5chhj 1/1 Running 0 98dpod/virt-api-674467958c-wzcmk 1/1 Running 0 5dpod/virt-controller-6768977b-49wwb 1/1 Running 0 98dpod/virt-controller-6768977b-6pfcm 1/1 Running 0 5dpod/virt-handler-4hztq 1/1 Running 0 5dpod/virt-handler-x98x5 1/1 Running 0 98dpod/virt-operator-85f65df79b-lg8xb 1/1 Running 0 5dpod/virt-operator-85f65df79b-rp8p5 1/1 Running 0 98dDeploy a VM to test: The following is copied fromkubevirt. io. First create a secret from your public key: kubectl create secret generic my-pub-key --from-file=key1=~/. ssh/id_rsa. pubNext, create the VM: # Create a VM referencing the Secret using propagation method configDrivecat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: name: testvmspec: running: true template: spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk rng: {} resources: requests: memory: 1024M terminationGracePeriodSeconds: 0 accessCredentials: - sshPublicKey: source: secret: secretName: my-pub-key propagationMethod: configDrive: {} volumes: - containerDisk: image: quay. io/containerdisks/fedora:latest name: containerdisk - cloudInitConfigDrive: userData: |- #cloud-config password: fedora chpasswd: { expire: False } name: cloudinitdiskEOFCheck that the test VM is running: kubectl get vmExample output: NAME AGE STATUS READYtestvm 30s Running TrueDelete the VM: kubectl delete testvmSet Up Cluster Autoscaler: Prepare the permissions for Cluster Autoscaler: So that CAS can set the desired capacity of each node group dynamically, wemust grant it limited access to certain AWS resources. The first step to thisis to define the IAM policy. This section is based off of the “Create an IAM policy and role” section ofthe AWSAutoscalingdocumentation. Create the cluster-specific policy document: Prepare the policy document by rendering the following file. cat > policy. 
json <<EOF{ Version : 2012-10-17 , Statement : [ { Sid : VisualEditor0 , Effect : Allow , Action : [ autoscaling:SetDesiredCapacity , autoscaling:TerminateInstanceInAutoScalingGroup ], Resource : * }, { Sid : VisualEditor1 , Effect : Allow , Action : [ autoscaling:DescribeAutoScalingInstances , autoscaling:DescribeAutoScalingGroups , ec2:DescribeLaunchTemplateVersions , autoscaling:DescribeTags , autoscaling:DescribeLaunchConfigurations , ec2:DescribeInstanceTypes ], Resource : * } ]}EOFThe above should be enough for CAS to do its job. Next, create the policy: aws iam create-policy \ --policy-name eks-${RD_REGION}-${RD_CLUSTER_NAME}-ClusterAutoscalerPolicy \ --policy-document file://policy. json IMPORTANT: Take note of the returned policy ARN. You will need that below. Create the IAM role and k8s service account pair: The Cluster Autoscaler needs a service account in the k8s cluster that’sassociated with an IAM role that consumes the policy document we created in theprevious section. This is normally a two-step process but can be created in asingle command using eksctl: For more information on what eksctl is doing under the covers, see How ItWorks from theeksctl documentation for IAM Roles for Service Accounts. export RD_POLICY_ARN= <Get this value from the last command's output> eksctl create iamserviceaccount \ --cluster=${RD_CLUSTER_NAME} \ --region=${RD_REGION} \ --namespace=kube-system \ --name=cluster-autoscaler \ --attach-policy-arn=${RD_POLICY_ARN} \ --override-existing-serviceaccounts \ --approveDouble check that the cluster-autoscaler service account has been correctlyannotated with the IAM role that was created by eksctl in the same step: kubectl get sa cluster-autoscaler -n kube-system -ojson | \ jq -r '. metadata. annotations | . eks. amazonaws. com/role-arn 'Example output: arn:aws:iam::365499461711:role/eksctl-my-cluster-addon-iamserviceaccount-. . . Check from the AWS Console if the above role contains the policy that we createdearlier. Deploy Cluster Autoscaler: First, find the most recent Cluster Autoscaler version that has the same MAJORand MINOR version as the kubernetes cluster you’re deploying to. Get the kube cluster’s version: kubectl version -ojson | jq -r . serverVersion. gitVersionExample output: v1. 27. 4-eks-2d98532Choose the appropriate version for CAS. You can get the latest ClusterAutoscaler versions from its Github ReleasesPage. Example: export CLUSTER_AUTOSCALER_VERSION=1. 27. 3Next, deploy the cluster autoscaler using the deployment template that Iprepared in the companionrepo envsubst < <(curl https://raw. githubusercontent. com/relaxdiego/kubevirt-cas-baremetal/main/cas-deployment. yaml. template) | \ kubectl apply -f -Check the cluster autoscaler status: kubectl get deploy,pod -l app=cluster-autoscaler -n kube-systemExample output: NAME READY UP-TO-DATE AVAILABLE AGEdeployment. apps/cluster-autoscaler 1/1 1 1 4m1sNAME READY STATUS RESTARTS AGEpod/cluster-autoscaler-6c58bd6d89-v8wbn 1/1 Running 0 60sTail the cluster-autoscaler pod’s logs to see what’s happening: kubectl -n kube-system logs -f deployment. apps/cluster-autoscalerBelow are example log entries from Cluster Autoscaler terminating an unneedednode: node ip-XXXX. YYYY. compute. internal may be removed. . . ip-XXXX. YYYY. compute. internal was unneeded for 1m3. 743475455sOnce the timeout has been reached (default: 10 minutes), CAS will scale downthe group: Scale-down: removing empty node ip-XXXX. YYYY. compute. internalEvent(v1. ObjectReference{Kind: ConfigMap , Namespace: kube-system , . . . 
Successfully added ToBeDeletedTaint on node ip-XXXX. YYYY. compute. internalTerminating EC2 instance: i-ZZZZDeleteInstances was called: . . . For more information on how Cluster Autoscaler scales down a node group, seeHow does scale-downwork?from the project’s FAQ. When you try to get the list of nodes, you should see the bare metal nodestainted such that they are no longer schedulable: NAME STATUS ROLES AGE VERSIONip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready <none> 112m v1. 27. 3-eks-a5565adip-XXXX Ready <none> 112m v1. 27. 3-eks-a5565adIn a few more minutes, the nodes will be deleted. To try the scale up, just deploy a VM. Expanding Node Group eks-ng-eacf8ebb . . . Best option to resize: eks-ng-eacf8ebbEstimated 1 nodes needed in eks-ng-eacf8ebbFinal scale-up plan: [{eks-ng-eacf8ebb 0->1 (max: 3)}]Scale-up: setting group eks-ng-eacf8ebb size to 1Setting asg eks-ng-eacf8ebb size to 1Done: At this point you should have a working, auto-scaling EKS cluster that can hostVMs on bare metal nodes. If you have any questions, ask themhere. References: Amazon EKS Autoscaling Cluster Autoscaler in Plain English AWS EKS Best Practices Guide IAM roles for service accounts eksctl create iamserviceaccount" + "body": "Introduction: For this article, we’ll learn about the process of setting upKubeVirt with ClusterAutoscaleron EKS. In addition, we’ll be using bare metal nodes to host KubeVirt VMs. Required Base Knowledge: This article will talk about how to make various software systems work togetherbut introducing each one in detail is outside of its scope. Thus, you must already: Know how to administer a Kubernetes cluster; Be familiar with AWS, specifically IAM and EKS; and Have some experience with KubeVirt. Companion Code: All the code used in this article may also be found atgithub. com/relaxdiego/kubevirt-cas-baremetal. Set Up the Cluster: Shared environment variables: First let’s set some environment variables: # The name of the EKS cluster we're going to createexport RD_CLUSTER_NAME=my-cluster# The region where we will create the clusterexport RD_REGION=us-west-2# Kubernetes version to useexport RD_K8S_VERSION=1. 27# The name of the keypair that we're going to inject into the nodes. You# must create this ahead of time in the correct region. export RD_EC2_KEYPAIR_NAME=eks-my-clusterPrepare the cluster. yaml file: Using eksctl, prepare an EKS cluster config: eksctl create cluster \ --dry-run \ --name=${RD_CLUSTER_NAME} \ --nodegroup-name ng-infra \ --node-type m5. xlarge \ --nodes 2 \ --nodes-min 2 \ --nodes-max 2 \ --node-labels workload=infra \ --region=${RD_REGION} \ --ssh-access \ --ssh-public-key ${RD_EC2_KEYPAIR_NAME} \ --version ${RD_K8S_VERSION} \ --vpc-nat-mode HighlyAvailable \ --with-oidc \> cluster. yaml--dry-run means the command will not actually create the cluster but willinstead output a config to stdout which we then write to cluster. yaml. Open the file and look at what it has produced. For more info on the schema used by cluster. yaml, see the Config fileschema page from eksctl. io This cluster will start out with a node group that we will use to host our“infra” services. This is why we are using the cheaper m5. xlarge rather thana baremetal instance type. However, we also need to ensure that none of our VMswill ever be scheduled in these nodes. Thus we need to taint them. In thegenerated cluster. 
yaml file, append the following taint to the only nodegroup in the managedNodeGroups list: managedNodeGroups:- amiFamily: AmazonLinux2 . . . taints: - key: CriticalAddonsOnly effect: NoScheduleCreate the cluster: We can now create the cluster: eksctl create cluster --config-file cluster. yamlExample output: 2023-08-20 07:59:14 [ℹ] eksctl version . . . 2023-08-20 07:59:14 [ℹ] using region us-west-2 . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2a . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2b . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2c . . . . . . 2023-08-20 08:14:06 [ℹ] kubectl command should work with . . . 2023-08-20 08:14:06 [✔] EKS cluster my-cluster in us-west-2 is readyOnce the command is done, you should be able to query the the kube API. Forexample: kubectl get nodesExample output: NAME STATUS ROLES AGE VERSIONip-XXX. compute. internal Ready <none> 32m v1. 27. 4-eks-2d98532ip-YYY. compute. internal Ready <none> 32m v1. 27. 4-eks-2d98532Create the Node Groups: As per this section of the Cluster Autoscalerdocs: If you’re using Persistent Volumes, your deployment needs to run in the sameAZ as where the EBS volume is, otherwise the pod scheduling could fail if itis scheduled in a different AZ and cannot find the EBS volume. To overcomethis, either use a single AZ ASG for this use case, or an ASG-per-AZ whileenabling --balance-similar-node-groups. Based on the above, we will create a node group for each of the availabilityzones (AZs) that was declared in cluster. yaml so that the Cluster Autoscaler willalways bring up a node in the AZ where a VM’s EBS-backed PV is located. To do that, we will first prepare a template that we can then feed toenvsubst. Save the following in node-group. yaml. template: ---# See: Config File Schema <https://eksctl. io/usage/schema/>apiVersion: eksctl. io/v1alpha5kind: ClusterConfigmetadata: name: ${RD_CLUSTER_NAME} region: ${RD_REGION}managedNodeGroups: - name: ng-${EKS_AZ}-c5-metal amiFamily: AmazonLinux2 instanceType: c5. metal availabilityZones: - ${EKS_AZ} desiredCapacity: 1 maxSize: 3 minSize: 0 labels: alpha. eksctl. io/cluster-name: my-cluster alpha. eksctl. io/nodegroup-name: ng-${EKS_AZ}-c5-metal workload: vm privateNetworking: false ssh: allow: true publicKeyPath: ${RD_EC2_KEYPAIR_NAME} volumeSize: 500 volumeIOPS: 10000 volumeThroughput: 750 volumeType: gp3 propagateASGTags: true tags: alpha. eksctl. io/nodegroup-name: ng-${EKS_AZ}-c5-metal alpha. eksctl. io/nodegroup-type: managed k8s. io/cluster-autoscaler/my-cluster: owned k8s. io/cluster-autoscaler/enabled: true # The following tags help CAS determine that this node group is able # to satisfy the label and resource requirements of the KubeVirt VMs. # See: https://github. com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README. md#auto-discovery-setup k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/kvm: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/tun: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/vhost-net: 1 k8s. io/cluster-autoscaler/node-template/resources/ephemeral-storage: 50M k8s. io/cluster-autoscaler/node-template/label/kubevirt. io/schedulable: true The last few tags bears additional emphasis. They are required because when avirtual machine is created, it will have the following requirements: requests: devices. kubevirt. io/kvm: 1 devices. kubevirt. io/tun: 1 devices. kubevirt. io/vhost-net: 1 ephemeral-storage: 50MnodeSelectors: kubevirt. 
io/schedulable=trueHowever, at least when scaling from zero for the first time, CAS will have noknowledge of this information unless the correct AWS tags are added to the nodegroup. This is why we have the following added to the managed node group’stags: k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/kvm: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/tun: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/vhost-net: 1 k8s. io/cluster-autoscaler/node-template/resources/ephemeral-storage: 50Mk8s. io/cluster-autoscaler/node-template/label/kubevirt. io/schedulable: true For more information on these tags, see Auto-DiscoverySetup. Create the VM Node Groups: We can now create the node group: yq . availabilityZones[] cluster. yaml -r | \ xargs -I{} bash -c export EKS_AZ={}; envsubst < node-group. yaml. template | \ eksctl create nodegroup --config-file - Deploy KubeVirt: The following was adapted from KubeVirt quickstart with cloudproviders. Deploy the KubeVirt operator: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/v1. 0. 0/kubevirt-operator. yamlSo that the operator will know how to deploy KubeVirt, let’s add the KubeVirtresource: cat <<EOF | kubectl apply -f -apiVersion: kubevirt. io/v1kind: KubeVirtmetadata: name: kubevirt namespace: kubevirtspec: certificateRotateStrategy: {} configuration: developerConfiguration: featureGates: [] customizeComponents: {} imagePullPolicy: IfNotPresent workloadUpdateStrategy: {} infra: nodePlacement: nodeSelector: workload: infra tolerations: - key: CriticalAddonsOnly operator: ExistsEOF Notice how we are specifically configuring KubeVirt itself to tolerate theCriticalAddonsOnly taint. This is so that the KubeVirt services themselvescan be scheduled in the infra nodes instead of the bare metal nodes which wewant to scale down to zero when there are no VMs. Wait until KubeVirt is in a Deployed state: kubectl get -n kubevirt -o=jsonpath= {. status. phase} \ kubevirt. kubevirt. io/kubevirtExample output: DeployedDouble check that all KubeVirt components are healthy: kubectl get pods -n kubevirtExample output: NAME READY STATUS RESTARTS AGEpod/virt-api-674467958c-5chhj 1/1 Running 0 98dpod/virt-api-674467958c-wzcmk 1/1 Running 0 5dpod/virt-controller-6768977b-49wwb 1/1 Running 0 98dpod/virt-controller-6768977b-6pfcm 1/1 Running 0 5dpod/virt-handler-4hztq 1/1 Running 0 5dpod/virt-handler-x98x5 1/1 Running 0 98dpod/virt-operator-85f65df79b-lg8xb 1/1 Running 0 5dpod/virt-operator-85f65df79b-rp8p5 1/1 Running 0 98dDeploy a VM to test: The following is copied fromkubevirt. io. First create a secret from your public key: kubectl create secret generic my-pub-key --from-file=key1=~/. ssh/id_rsa. pubNext, create the VM: # Create a VM referencing the Secret using propagation method configDrivecat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: name: testvmspec: runStrategy: Always template: spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk rng: {} resources: requests: memory: 1024M terminationGracePeriodSeconds: 0 accessCredentials: - sshPublicKey: source: secret: secretName: my-pub-key propagationMethod: configDrive: {} volumes: - containerDisk: image: quay. 
io/containerdisks/fedora:latest name: containerdisk - cloudInitConfigDrive: userData: |- #cloud-config password: fedora chpasswd: { expire: False } name: cloudinitdiskEOFCheck that the test VM is running: kubectl get vmExample output: NAME AGE STATUS READYtestvm 30s Running TrueDelete the VM: kubectl delete testvmSet Up Cluster Autoscaler: Prepare the permissions for Cluster Autoscaler: So that CAS can set the desired capacity of each node group dynamically, wemust grant it limited access to certain AWS resources. The first step to thisis to define the IAM policy. This section is based off of the “Create an IAM policy and role” section ofthe AWSAutoscalingdocumentation. Create the cluster-specific policy document: Prepare the policy document by rendering the following file. cat > policy. json <<EOF{ Version : 2012-10-17 , Statement : [ { Sid : VisualEditor0 , Effect : Allow , Action : [ autoscaling:SetDesiredCapacity , autoscaling:TerminateInstanceInAutoScalingGroup ], Resource : * }, { Sid : VisualEditor1 , Effect : Allow , Action : [ autoscaling:DescribeAutoScalingInstances , autoscaling:DescribeAutoScalingGroups , ec2:DescribeLaunchTemplateVersions , autoscaling:DescribeTags , autoscaling:DescribeLaunchConfigurations , ec2:DescribeInstanceTypes ], Resource : * } ]}EOFThe above should be enough for CAS to do its job. Next, create the policy: aws iam create-policy \ --policy-name eks-${RD_REGION}-${RD_CLUSTER_NAME}-ClusterAutoscalerPolicy \ --policy-document file://policy. json IMPORTANT: Take note of the returned policy ARN. You will need that below. Create the IAM role and k8s service account pair: The Cluster Autoscaler needs a service account in the k8s cluster that’sassociated with an IAM role that consumes the policy document we created in theprevious section. This is normally a two-step process but can be created in asingle command using eksctl: For more information on what eksctl is doing under the covers, see How ItWorks from theeksctl documentation for IAM Roles for Service Accounts. export RD_POLICY_ARN= <Get this value from the last command's output> eksctl create iamserviceaccount \ --cluster=${RD_CLUSTER_NAME} \ --region=${RD_REGION} \ --namespace=kube-system \ --name=cluster-autoscaler \ --attach-policy-arn=${RD_POLICY_ARN} \ --override-existing-serviceaccounts \ --approveDouble check that the cluster-autoscaler service account has been correctlyannotated with the IAM role that was created by eksctl in the same step: kubectl get sa cluster-autoscaler -n kube-system -ojson | \ jq -r '. metadata. annotations | . eks. amazonaws. com/role-arn 'Example output: arn:aws:iam::365499461711:role/eksctl-my-cluster-addon-iamserviceaccount-. . . Check from the AWS Console if the above role contains the policy that we createdearlier. Deploy Cluster Autoscaler: First, find the most recent Cluster Autoscaler version that has the same MAJORand MINOR version as the kubernetes cluster you’re deploying to. Get the kube cluster’s version: kubectl version -ojson | jq -r . serverVersion. gitVersionExample output: v1. 27. 4-eks-2d98532Choose the appropriate version for CAS. You can get the latest ClusterAutoscaler versions from its Github ReleasesPage. Example: export CLUSTER_AUTOSCALER_VERSION=1. 27. 3Next, deploy the cluster autoscaler using the deployment template that Iprepared in the companionrepo envsubst < <(curl https://raw. githubusercontent. com/relaxdiego/kubevirt-cas-baremetal/main/cas-deployment. yaml. 
template) | \ kubectl apply -f -Check the cluster autoscaler status: kubectl get deploy,pod -l app=cluster-autoscaler -n kube-systemExample output: NAME READY UP-TO-DATE AVAILABLE AGEdeployment. apps/cluster-autoscaler 1/1 1 1 4m1sNAME READY STATUS RESTARTS AGEpod/cluster-autoscaler-6c58bd6d89-v8wbn 1/1 Running 0 60sTail the cluster-autoscaler pod’s logs to see what’s happening: kubectl -n kube-system logs -f deployment. apps/cluster-autoscalerBelow are example log entries from Cluster Autoscaler terminating an unneedednode: node ip-XXXX. YYYY. compute. internal may be removed. . . ip-XXXX. YYYY. compute. internal was unneeded for 1m3. 743475455sOnce the timeout has been reached (default: 10 minutes), CAS will scale downthe group: Scale-down: removing empty node ip-XXXX. YYYY. compute. internalEvent(v1. ObjectReference{Kind: ConfigMap , Namespace: kube-system , . . . Successfully added ToBeDeletedTaint on node ip-XXXX. YYYY. compute. internalTerminating EC2 instance: i-ZZZZDeleteInstances was called: . . . For more information on how Cluster Autoscaler scales down a node group, seeHow does scale-downwork?from the project’s FAQ. When you try to get the list of nodes, you should see the bare metal nodestainted such that they are no longer schedulable: NAME STATUS ROLES AGE VERSIONip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready <none> 112m v1. 27. 3-eks-a5565adip-XXXX Ready <none> 112m v1. 27. 3-eks-a5565adIn a few more minutes, the nodes will be deleted. To try the scale up, just deploy a VM. Expanding Node Group eks-ng-eacf8ebb . . . Best option to resize: eks-ng-eacf8ebbEstimated 1 nodes needed in eks-ng-eacf8ebbFinal scale-up plan: [{eks-ng-eacf8ebb 0->1 (max: 3)}]Scale-up: setting group eks-ng-eacf8ebb size to 1Setting asg eks-ng-eacf8ebb size to 1Done: At this point you should have a working, auto-scaling EKS cluster that can hostVMs on bare metal nodes. If you have any questions, ask themhere. References: Amazon EKS Autoscaling Cluster Autoscaler in Plain English AWS EKS Best Practices Guide IAM roles for service accounts eksctl create iamserviceaccount" }, { "id": 6, "url": "/2023/Managing-KubeVirt-VMs-with-Ansible.html", @@ -363,7 +363,7 @@
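Besides tailing the deployment logs, the autoscaler's own view of each node group can be read from the status ConfigMap it keeps in kube-system. The commands below are a small sketch that assumes the deployment template keeps the default --write-status-configmap=true setting; if that flag was turned off, the logs are the only source of this information.

    # Dump the autoscaler's health/activity summary
    kubectl -n kube-system get configmap cluster-autoscaler-status \
      -o jsonpath='{.data.status}'

    # Show only the scale decisions from the logs
    kubectl -n kube-system logs deploy/cluster-autoscaler | grep -Ei 'scale-(up|down)'

A healthy setup should list each bare metal node group together with its registered node count and scaling limits, which is a quick way to confirm that the ASG tags added earlier were picked up.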

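To make the scale-up test target the bare metal group explicitly, the test VM can also be pinned to the workload: vm label that the node-group template applies to its nodes. The manifest below is only an illustrative sketch (the testvm-metal name and sizing are arbitrary); any VM requesting the kvm device resource would trigger the same scale-up.

    cat <<EOF | kubectl create -f -
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: testvm-metal
    spec:
      runStrategy: Always
      template:
        spec:
          # Land only on the c5.metal node group created earlier
          nodeSelector:
            workload: vm
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
            resources:
              requests:
                memory: 1024M
          terminationGracePeriodSeconds: 0
          volumes:
          - containerDisk:
              image: quay.io/containerdisks/fedora:latest
            name: containerdisk
    EOF

    # Watch the node group grow from zero, then the VMI start
    kubectl get nodes -l workload=vm -w
    kubectl get vmi testvm-metal -w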
    "title": "NetworkPolicies for KubeVirt VMs secondary networks using OVN-Kubernetes", "author" : "Miguel Duarte Barroso", "tags" : "Kubevirt, kubernetes, virtual machine, VM, SDN, OVN, NetworkPolicy", - "body": "Introduction: Kubernetes NetworkPolicies are constructs to control traffic flow at the IPaddress or port level (OSI layers 3 or 4). They allow the user to specify how a pod (or group of pods) is allowed tocommunicate with other entities on the network. In simpler words: the user canspecify ingress from or egress to other workloads, using L3 / L4 semantics. Keeping in mind NetworkPolicy is a Kubernetes construct - which only caresabout a single network interface - they are only usable for the cluster’sdefault network interface. This leaves a considerable gap for Virtual Machineusers, since they are heavily invested in secondary networks. The k8snetworkplumbingwg has addressed this limitation by providing aMultiNetworkPolicy CRD - it features the exact same API as NetworkPolicybut can target network-attachment-definitions. OVN-Kubernetes implements this API, and configures access control accordinglyfor secondary networks in the cluster. In this post we will see how we can govern access control for VMs using themulti-network policy API. On our simple example, we’ll only allow into our VMsfor traffic ingressing from a particular CIDR range. Current limitations of MultiNetworkPolicies for VMs: Kubernetes NetworkPolicy has three types of policy peers: namespace selectors: allows ingress-from, egress-to based on the peer’s namespace labels pod selectors: allows ingress-from, egress-to based on the peer’s labels ip block: allows ingress-from, egress-to based on the peer’s IP addressWhile MultiNetworkPolicy allows these three types, when used with VMs werecommend using only the IPBlock policy peer - both namespace and podselectors prevent the live-migration of Virtual Machines (these policy peersrequire OVN-K managed IPAM, and currently the live-migration feature is onlyavailable when IPAM is not enabled on the interfaces). Demo: To run this demo, we will prepare a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirt Multi-Network policy APIThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,upstream latest multus-cni, and the multi-network policy CRDs deployed. Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. sh --multi-network-enable ; popdThis will get you a running kind cluster (one control plane, and two workernodes), configured to use OVN-Kubernetes as the default cluster network,configuring the multi-homing OVN-Kubernetes feature gate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v1. 0. 0). export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. 
Limiting ingress to a KubeVirt VM: In this example, we will configure a MultiNetworkPolicy allowing ingress intoour VMs only from a particular CIDR range - let’s say 10. 200. 0. 0/30. Provision the following NAD (to allow our VMs to live-migrate, we do not definea subnet): ---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: flatl2netspec: config: |2 { cniVersion : 0. 4. 0 , name : flatl2net , type : ovn-k8s-cni-overlay , topology : layer2 , netAttachDefName : default/flatl2net }Let’s now provision our six VMs, with the following name to IP address(statically configured via cloud-init) association: vm1: 10. 200. 0. 1 vm2: 10. 200. 0. 2 vm3: 10. 200. 0. 3 vm4: 10. 200. 0. 4 vm5: 10. 200. 0. 5 vm6: 10. 200. 0. 6---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm1 name: vm1spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm1 kubevirt. io/vm: vm1 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 1/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm2 name: vm2spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm2 kubevirt. io/vm: vm2 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 2/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm3 name: vm3spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm3 kubevirt. io/vm: vm3 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 3/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm4 name: vm4spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm4 kubevirt. 
io/vm: vm4 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 4/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm5 name: vm5spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm5 kubevirt. io/vm: vm5 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 5/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm6 name: vm6spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm6 kubevirt. io/vm: vm6 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 6/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdiskNOTE: it is important to highlight all the Virtual Machines (and thenetwork-attachment-definition) are defined in the default namespace. After this step, we should have the following deployment: Let’s check the VMs vm1 and vm4 can ping their peers in the same subnet. For that we willconnect to the VMs over their serial console: First, let’s check vm1: ➜ virtctl console vm1Successfully connected to vm1 console. The escape sequence is ^][fedora@vm1 ~]$ ping 10. 200. 0. 2 -c 4PING 10. 200. 0. 2 (10. 200. 0. 2) 56(84) bytes of data. 64 bytes from 10. 200. 0. 2: icmp_seq=1 ttl=64 time=5. 16 ms64 bytes from 10. 200. 0. 2: icmp_seq=2 ttl=64 time=1. 41 ms64 bytes from 10. 200. 0. 2: icmp_seq=3 ttl=64 time=34. 2 ms64 bytes from 10. 200. 0. 2: icmp_seq=4 ttl=64 time=2. 56 ms--- 10. 200. 0. 2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 1. 406/10. 841/34. 239/13. 577 ms[fedora@vm1 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 6: icmp_seq=1 ttl=64 time=3. 77 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=1. 46 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=5. 47 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=1. 
74 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3007msrtt min/avg/max/mdev = 1. 459/3. 109/5. 469/1. 627 ms[fedora@vm1 ~]$ And from vm4: ➜ ~ virtctl console vm4Successfully connected to vm4 console. The escape sequence is ^][fedora@vm4 ~]$ ping 10. 200. 0. 1 -c 4PING 10. 200. 0. 1 (10. 200. 0. 1) 56(84) bytes of data. 64 bytes from 10. 200. 0. 1: icmp_seq=1 ttl=64 time=3. 20 ms64 bytes from 10. 200. 0. 1: icmp_seq=2 ttl=64 time=1. 62 ms64 bytes from 10. 200. 0. 1: icmp_seq=3 ttl=64 time=1. 44 ms64 bytes from 10. 200. 0. 1: icmp_seq=4 ttl=64 time=0. 951 ms--- 10. 200. 0. 1 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 0. 951/1. 803/3. 201/0. 843 ms[fedora@vm4 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 6: icmp_seq=1 ttl=64 time=1. 85 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=1. 02 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=1. 27 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=0. 970 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 0. 970/1. 275/1. 850/0. 350 msWe will now provision a MultiNetworkPolicy applying to all the VMs definedabove. To do this mapping correcly, the policy has to: Be in the same namespace as the VM. Set k8s. v1. cni. cncf. io/policy-for annotation matching the secondary network used by the VM. Set matchLabels selector matching the labels set on VM’sspec. template. metadata. This policy will allow ingress into these access-control labeled pods only if the traffic originates from within the 10. 200. 0. 0/30 CIDR range(IPs 10. 200. 0. 1-3). ---apiVersion: k8s. cni. cncf. io/v1beta1kind: MultiNetworkPolicymetadata: name: ingress-ipblock annotations: k8s. v1. cni. cncf. io/policy-for: default/flatl2netspec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10. 200. 0. 0/30Taking into account our example, onlyvm1, vm2, and vm3 will be able to contact any of its peers, as picturedby the following diagram: Let’s try again the ping after provisioning the MultiNetworkPolicy object: From vm1 (inside the allowed ip block range): [fedora@vm1 ~]$ ping 10. 200. 0. 2 -c 4PING 10. 200. 0. 2 (10. 200. 0. 2) 56(84) bytes of data. 64 bytes from 10. 200. 0. 2: icmp_seq=1 ttl=64 time=6. 48 ms64 bytes from 10. 200. 0. 2: icmp_seq=2 ttl=64 time=4. 40 ms64 bytes from 10. 200. 0. 2: icmp_seq=3 ttl=64 time=1. 28 ms64 bytes from 10. 200. 0. 2: icmp_seq=4 ttl=64 time=1. 51 ms--- 10. 200. 0. 2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 1. 283/3. 418/6. 483/2. 154 ms[fedora@vm1 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 6: icmp_seq=1 ttl=64 time=3. 81 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=2. 67 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=1. 68 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=1. 63 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 1. 630/2. 446/3. 808/0. 888 msFrom vm4 (outside the allowed ip block range): [fedora@vm4 ~]$ ping 10. 200. 0. 1 -c 4PING 10. 200. 0. 1 (10. 200. 0. 1) 56(84) bytes of data. --- 10. 200. 0. 1 ping statistics ---4 packets transmitted, 0 received, 100% packet loss, time 3083ms[fedora@vm4 ~]$ ping 10. 200. 
0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. --- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 0 received, 100% packet loss, time 3089msConclusions: In this post we’ve shown how MultiNetworkPolicies can be used to provideaccess control to VMs with secondary network interfaces. We have provided a comprehensive example on how a policy can be used to limitingress to our VMs only from desired sources, based on the client’s IP address. " + "body": "Introduction: Kubernetes NetworkPolicies are constructs to control traffic flow at the IPaddress or port level (OSI layers 3 or 4). They allow the user to specify how a pod (or group of pods) is allowed tocommunicate with other entities on the network. In simpler words: the user canspecify ingress from or egress to other workloads, using L3 / L4 semantics. Keeping in mind NetworkPolicy is a Kubernetes construct - which only caresabout a single network interface - they are only usable for the cluster’sdefault network interface. This leaves a considerable gap for Virtual Machineusers, since they are heavily invested in secondary networks. The k8snetworkplumbingwg has addressed this limitation by providing aMultiNetworkPolicy CRD - it features the exact same API as NetworkPolicybut can target network-attachment-definitions. OVN-Kubernetes implements this API, and configures access control accordinglyfor secondary networks in the cluster. In this post we will see how we can govern access control for VMs using themulti-network policy API. On our simple example, we’ll only allow into our VMsfor traffic ingressing from a particular CIDR range. Current limitations of MultiNetworkPolicies for VMs: Kubernetes NetworkPolicy has three types of policy peers: namespace selectors: allows ingress-from, egress-to based on the peer’s namespace labels pod selectors: allows ingress-from, egress-to based on the peer’s labels ip block: allows ingress-from, egress-to based on the peer’s IP addressWhile MultiNetworkPolicy allows these three types, when used with VMs werecommend using only the IPBlock policy peer - both namespace and podselectors prevent the live-migration of Virtual Machines (these policy peersrequire OVN-K managed IPAM, and currently the live-migration feature is onlyavailable when IPAM is not enabled on the interfaces). Demo: To run this demo, we will prepare a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirt Multi-Network policy APIThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,upstream latest multus-cni, and the multi-network policy CRDs deployed. Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. sh --multi-network-enable ; popdThis will get you a running kind cluster (one control plane, and two workernodes), configured to use OVN-Kubernetes as the default cluster network,configuring the multi-homing OVN-Kubernetes feature gate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v1. 0. 0). export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. 
yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Limiting ingress to a KubeVirt VM: In this example, we will configure a MultiNetworkPolicy allowing ingress intoour VMs only from a particular CIDR range - let’s say 10. 200. 0. 0/30. Provision the following NAD (to allow our VMs to live-migrate, we do not definea subnet): ---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: flatl2netspec: config: |2 { cniVersion : 0. 4. 0 , name : flatl2net , type : ovn-k8s-cni-overlay , topology : layer2 , netAttachDefName : default/flatl2net }Let’s now provision our six VMs, with the following name to IP address(statically configured via cloud-init) association: vm1: 10. 200. 0. 1 vm2: 10. 200. 0. 2 vm3: 10. 200. 0. 3 vm4: 10. 200. 0. 4 vm5: 10. 200. 0. 5 vm6: 10. 200. 0. 6---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm1 name: vm1spec: runStrategy: Always template: metadata: labels: name: access-control kubevirt. io/domain: vm1 kubevirt. io/vm: vm1 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 1/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm2 name: vm2spec: runStrategy: Always template: metadata: labels: name: access-control kubevirt. io/domain: vm2 kubevirt. io/vm: vm2 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 2/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm3 name: vm3spec: runStrategy: Always template: metadata: labels: name: access-control kubevirt. io/domain: vm3 kubevirt. io/vm: vm3 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 3/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. 
io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm4 name: vm4spec: runStrategy: Always template: metadata: labels: name: access-control kubevirt. io/domain: vm4 kubevirt. io/vm: vm4 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 4/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm5 name: vm5spec: runStrategy: Always template: metadata: labels: name: access-control kubevirt. io/domain: vm5 kubevirt. io/vm: vm5 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 5/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm6 name: vm6spec: runStrategy: Always template: metadata: labels: name: access-control kubevirt. io/domain: vm6 kubevirt. io/vm: vm6 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 6/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdiskNOTE: it is important to highlight all the Virtual Machines (and thenetwork-attachment-definition) are defined in the default namespace. After this step, we should have the following deployment: Let’s check the VMs vm1 and vm4 can ping their peers in the same subnet. For that we willconnect to the VMs over their serial console: First, let’s check vm1: ➜ virtctl console vm1Successfully connected to vm1 console. The escape sequence is ^][fedora@vm1 ~]$ ping 10. 200. 0. 2 -c 4PING 10. 200. 0. 2 (10. 200. 0. 2) 56(84) bytes of data. 64 bytes from 10. 200. 0. 2: icmp_seq=1 ttl=64 time=5. 16 ms64 bytes from 10. 200. 0. 2: icmp_seq=2 ttl=64 time=1. 41 ms64 bytes from 10. 200. 0. 2: icmp_seq=3 ttl=64 time=34. 2 ms64 bytes from 10. 200. 0. 2: icmp_seq=4 ttl=64 time=2. 56 ms--- 10. 200. 0. 2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 1. 406/10. 841/34. 239/13. 577 ms[fedora@vm1 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 
6: icmp_seq=1 ttl=64 time=3. 77 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=1. 46 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=5. 47 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=1. 74 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3007msrtt min/avg/max/mdev = 1. 459/3. 109/5. 469/1. 627 ms[fedora@vm1 ~]$ And from vm4: ➜ ~ virtctl console vm4Successfully connected to vm4 console. The escape sequence is ^][fedora@vm4 ~]$ ping 10. 200. 0. 1 -c 4PING 10. 200. 0. 1 (10. 200. 0. 1) 56(84) bytes of data. 64 bytes from 10. 200. 0. 1: icmp_seq=1 ttl=64 time=3. 20 ms64 bytes from 10. 200. 0. 1: icmp_seq=2 ttl=64 time=1. 62 ms64 bytes from 10. 200. 0. 1: icmp_seq=3 ttl=64 time=1. 44 ms64 bytes from 10. 200. 0. 1: icmp_seq=4 ttl=64 time=0. 951 ms--- 10. 200. 0. 1 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 0. 951/1. 803/3. 201/0. 843 ms[fedora@vm4 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 6: icmp_seq=1 ttl=64 time=1. 85 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=1. 02 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=1. 27 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=0. 970 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 0. 970/1. 275/1. 850/0. 350 msWe will now provision a MultiNetworkPolicy applying to all the VMs definedabove. To do this mapping correcly, the policy has to: Be in the same namespace as the VM. Set k8s. v1. cni. cncf. io/policy-for annotation matching the secondary network used by the VM. Set matchLabels selector matching the labels set on VM’sspec. template. metadata. This policy will allow ingress into these access-control labeled pods only if the traffic originates from within the 10. 200. 0. 0/30 CIDR range(IPs 10. 200. 0. 1-3). ---apiVersion: k8s. cni. cncf. io/v1beta1kind: MultiNetworkPolicymetadata: name: ingress-ipblock annotations: k8s. v1. cni. cncf. io/policy-for: default/flatl2netspec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10. 200. 0. 0/30Taking into account our example, onlyvm1, vm2, and vm3 will be able to contact any of its peers, as picturedby the following diagram: Let’s try again the ping after provisioning the MultiNetworkPolicy object: From vm1 (inside the allowed ip block range): [fedora@vm1 ~]$ ping 10. 200. 0. 2 -c 4PING 10. 200. 0. 2 (10. 200. 0. 2) 56(84) bytes of data. 64 bytes from 10. 200. 0. 2: icmp_seq=1 ttl=64 time=6. 48 ms64 bytes from 10. 200. 0. 2: icmp_seq=2 ttl=64 time=4. 40 ms64 bytes from 10. 200. 0. 2: icmp_seq=3 ttl=64 time=1. 28 ms64 bytes from 10. 200. 0. 2: icmp_seq=4 ttl=64 time=1. 51 ms--- 10. 200. 0. 2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 1. 283/3. 418/6. 483/2. 154 ms[fedora@vm1 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 6: icmp_seq=1 ttl=64 time=3. 81 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=2. 67 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=1. 68 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=1. 63 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 1. 630/2. 446/3. 808/0. 
888 msFrom vm4 (outside the allowed ip block range): [fedora@vm4 ~]$ ping 10. 200. 0. 1 -c 4PING 10. 200. 0. 1 (10. 200. 0. 1) 56(84) bytes of data. --- 10. 200. 0. 1 ping statistics ---4 packets transmitted, 0 received, 100% packet loss, time 3083ms[fedora@vm4 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. --- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 0 received, 100% packet loss, time 3089msConclusions: In this post we’ve shown how MultiNetworkPolicies can be used to provideaccess control to VMs with secondary network interfaces. We have provided a comprehensive example on how a policy can be used to limitingress to our VMs only from desired sources, based on the client’s IP address. " }, { "id": 8, "url": "/2023/KubeVirt-v1-has-landed.html", @@ -384,14 +384,14 @@
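The same mechanism works in the egress direction. The manifest below is an untested sketch that mirrors the ingress example: it would restrict the access-control labelled VMs to only send traffic to the 10.200.0.0/30 range on the flatl2net network, using the same annotation, namespace and ipBlock-only approach.

    ---
    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: egress-ipblock
      annotations:
        k8s.v1.cni.cncf.io/policy-for: default/flatl2net
    spec:
      podSelector:
        matchLabels:
          name: access-control
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 10.200.0.0/30

As with standard NetworkPolicy semantics, once an Egress policy selects a pod, any egress traffic not explicitly allowed is dropped, so an egress rule like this should be combined with the ingress policy deliberately rather than applied blindly.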

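Before applying a policy it is also worth confirming which virt-launcher pods the podSelector will match. The labels set under spec.template.metadata are propagated to the launcher pods (which is what makes the podSelector usable for VMs in the first place), so a plain label query shows the affected workloads:

    # Launcher pods the podSelector will select
    kubectl get pods -l name=access-control -o wide

    # The policy object itself once created (the exact resource name can be
    # confirmed with: kubectl api-resources | grep -i multinetwork)
    kubectl get multi-networkpolicies -n default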
    "title": "Secondary networks connected to the physical underlay for KubeVirt VMs using OVN-Kubernetes", "author" : "Miguel Duarte Barroso", "tags" : "Kubevirt, kubernetes, virtual machine, VM, SDN, OVN", - "body": "Introduction: OVN (Open Virtual Network) is a series of daemons for the Open vSwitch thattranslate virtual network configurations into OpenFlow. It provides virtualnetworking capabilities for any type of workload on a virtualized platform(virtual machines and containers) using the same API. OVN provides a higher-layer of abstraction than Open vSwitch, working withlogical routers and logical switches, rather than flows. More details can be found in the OVN architectureman page. In this post we will repeat the scenario ofits bridge CNI equivalent,using this SDN approach. This secondary network topology is akin to the onedescribed in the flatL2 topology,but allows connectivity to the physical underlay. Demo: To run this demo, we will prepare a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirtThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,and upstream latest multus-cni deployed. Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. sh --multi-network-enable ; popdThis will get you a running kind cluster, configured to use OVN-Kubernetes asthe default cluster network, configuring the multi-homing OVN-Kubernetes featuregate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v0. 59. 0). export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Single broadcast domain: In this scenario we will see how traffic from a single localnet network can beconnected to a physical network in the host using a dedicated bridge. This scenario does not use any VLAN encapsulation, thus is simpler, since thenetwork admin does not need to provision any VLANs in advance. Configuring the underlay: When you’ve started the KinD cluster with the --multi-network-enable flag anadditional OCI network was created, and attached to each of the KinD nodes. But still, further steps may be required, depending on the desired L2configuration. Let’s first create a dedicated OVS bridge, and attach the aforementionedvirtualized network to it: for node in $(kubectl -n ovn-kubernetes get pods -l app=ovs-node -o jsonpath= {. items[*]. metadata. name} )do kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-br ovsbr1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-port ovsbr1 eth1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl set open . external_ids:ovn-bridge-mappings=physnet:breth0,localnet-network:ovsbr1doneThe first two commands are self-evident: you create an OVS bridge, and attacha port to it; the last one is not. 
In it, we’re using theOVN bridge mappingAPI to configure which OVS bridge must be used for each physical network. It creates a patch port between the OVN integration bridge - br-int - and theOVS bridge you tell it to, and traffic will be forwarded to/from it with thehelp of alocalnet port. NOTE: The provided mapping must match the name within thenet-attach-def. Spec. Config JSON, otherwise, the patch ports will not becreated. You will also have to configure an IP address on the bridge for theextra-network the kind script created. For that, you first need to identify thebridge’s name. In the example below we’re providing a command for the podmanruntime: podman network inspect underlay --format '{{ . NetworkInterface }}'podman3ip addr add 10. 128. 0. 1/24 dev podman3NOTE: for docker, please use the following command: ip a | grep `docker network inspect underlay --format '{{ index . IPAM. Config 0 Gateway }}'` | awk '{print $NF}'br-0aeb0318f71fip addr add 10. 128. 0. 1/24 dev br-0aeb0318f71fLet’s also use an IP in the same subnet as the network subnet (defined in theNAD). This IP address must be excluded from the IPAM pool (also on the NAD),otherwise the OVN-Kubernetes IPAM may assign it to a workload. Defining the OVN-Kubernetes networks: Once the underlay is configured, we can now provision the attachment configuration: ---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: localnet-networkspec: config: |2 { cniVersion : 0. 3. 1 , name : localnet-network , type : ovn-k8s-cni-overlay , topology : localnet , subnets : 10. 128. 0. 0/24 , excludeSubnets : 10. 128. 0. 1/32 , netAttachDefName : default/localnet-network }It is required to list the gateway IP in the excludedSubnets attribute, thuspreventing OVN-Kubernetes from assigning that IP address to the workloads. Spin up the VMs: These two VMs can be used for the single broadcast domain scenario (no VLANs). ---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-serverspec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: localnet bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: localnet multus: networkName: localnet-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-clientspec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker2 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: localnet bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: localnet multus: networkName: localnet-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true userData: |- #cloud-config password: fedora chpasswd: { expire: False }Test East / West communication: You can check east/west connectivity between both VMs via ICMP: $ kubectl get vmi vm-server -ojsonpath= { @. status. 
interfaces } | jq[ { infoSource : domain, guest-agent, multus-status , interfaceName : eth0 , ipAddress : 10. 128. 0. 2 , ipAddresses : [ 10. 128. 0. 2 , fe80::e83d:16ff:fe76:c1bd ], mac : ea:3d:16:76:c1:bd , name : localnet , queueCount : 1 }]$ virtctl console vm-clientSuccessfully connected to vm-client console. The escape sequence is ^][fedora@vm-client ~]$ ping 10. 128. 0. 2PING 10. 128. 0. 2 (10. 128. 0. 2) 56(84) bytes of data. 64 bytes from 10. 128. 0. 2: icmp_seq=1 ttl=64 time=0. 808 ms64 bytes from 10. 128. 0. 2: icmp_seq=2 ttl=64 time=0. 478 ms64 bytes from 10. 128. 0. 2: icmp_seq=3 ttl=64 time=0. 536 ms64 bytes from 10. 128. 0. 2: icmp_seq=4 ttl=64 time=0. 507 ms--- 10. 128. 0. 2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 0. 478/0. 582/0. 808/0. 131 msCheck underlay services: We can now start HTTP servers listening to the IPs attached onthe gateway: python3 -m http. server --bind 10. 128. 0. 1 9000And finally curl this from your client: [fedora@vm-client ~]$ curl -v 10. 128. 0. 1:9000* Trying 10. 128. 0. 1:9000. . . * Connected to 10. 128. 0. 1 (10. 128. 0. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 10. 128. 0. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:05:09 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923. . . Multiple physical networks pointing to the same OVS bridge: This example will feature 2 physical networks, each with a different VLAN,both pointing at the same OVS bridge. Configuring the underlay: Again, the first thing to do is create a dedicated OVS bridge, and attach theaforementioned virtualized network to it, while defining it as a trunk portfor two broadcast domains, with tags 10 and 20. for node in $(kubectl -n ovn-kubernetes get pods -l app=ovs-node -o jsonpath= {. items[*]. metadata. name} )do kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-br ovsbr1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-port ovsbr1 eth1 trunks=10,20 vlan_mode=trunk kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl set open . external_ids:ovn-bridge-mappings=physnet:breth0,tenantblue:ovsbr1,tenantred:ovsbr1doneWe must now configure the physical network; since the packets are leaving theOVS bridge tagged with either the 10 or 20 VLAN, we must configure the physicalnetwork where the virtualized nodes run to handle the tagged traffic. For that we will create two VLANed interfaces, each with a different subnet; wewill need to know the name of the bridge the kind script created to implementthe extra network it required. Those VLAN interfaces also need to be configuredwith an IP address: (for docker see previous example) podman network inspect underlay --format '{{ . NetworkInterface }}'podman3# create the VLANsip link add link podman3 name podman3. 10 type vlan id 10ip addr add 192. 168. 123. 1/24 dev podman3. 10ip link set dev podman3. 10 upip link add link podman3 name podman3. 20 type vlan id 20ip addr add 192. 168. 124. 1/24 dev podman3. 20ip link set dev podman3. 20 upNOTE: both the tenantblue and tenantred networks forward their trafficto the ovsbr1 OVS bridge. Defining the OVN-Kubernetes networks: Let us now provision the attachment configuration for the two physical networks. Notice they do not have a subnet defined, which means our workloads mustconfigure static IPs via cloud-init. 
---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: tenantredspec: config: |2 { cniVersion : 0. 3. 1 , name : tenantred , type : ovn-k8s-cni-overlay , topology : localnet , vlanID : 10, netAttachDefName : default/tenantred }---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: tenantbluespec: config: |2 { cniVersion : 0. 3. 1 , name : tenantblue , type : ovn-k8s-cni-overlay , topology : localnet , vlanID : 20, netAttachDefName : default/tenantblue }NOTE: each of the tenantblue and tenantred networks tags their trafficwith a different VLAN, which must be listed on the port trunks configuration. Spin up the VMs: These two VMs can be used for the OVS bridge sharing scenario (two physicalnetworks share the same OVS bridge, each on a different VLAN). ---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-red-1spec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-red bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-red multus: networkName: tenantred terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 123. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-red-2spec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: flatl2-overlay multus: networkName: tenantred terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 123. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-blue-1spec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-blue bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-blue multus: networkName: tenantblue terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 124. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-blue-2spec: running: true template: spec: nodeSelector: kubernetes. 
io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-blue bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-blue multus: networkName: tenantblue terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 124. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }Test East / West communication: You can check east/west connectivity between both red VMs via ICMP: $ kubectl get vmi vm-red-2 -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 192. 168. 123. 20 , ipAddresses : [ 192. 168. 123. 20 , fe80::e83d:16ff:fe76:c1bd ], mac : ea:3d:16:76:c1:bd , name : flatl2-overlay , queueCount : 1 }]$ virtctl console vm-red-1Successfully connected to vm-red-1 console. The escape sequence is ^][fedora@vm-red-1 ~]$ ping 192. 168. 123. 20PING 192. 168. 123. 20 (192. 168. 123. 20) 56(84) bytes of data. 64 bytes from 192. 168. 123. 20: icmp_seq=1 ttl=64 time=0. 534 ms64 bytes from 192. 168. 123. 20: icmp_seq=2 ttl=64 time=0. 246 ms64 bytes from 192. 168. 123. 20: icmp_seq=3 ttl=64 time=0. 178 ms64 bytes from 192. 168. 123. 20: icmp_seq=4 ttl=64 time=0. 236 ms--- 192. 168. 123. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3028msrtt min/avg/max/mdev = 0. 178/0. 298/0. 534/0. 138 msThe same behavior can be seen on the VMs attached to the blue network: $ kubectl get vmi vm-blue-2 -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 192. 168. 124. 20 , ipAddresses : [ 192. 168. 124. 20 , fe80::6cae:e4ff:fefc:bd02 ], mac : 6e:ae:e4:fc:bd:02 , name : physnet-blue , queueCount : 1 }]$ virtctl console vm-blue-1Successfully connected to vm-blue-1 console. The escape sequence is ^][fedora@vm-blue-1 ~]$ ping 192. 168. 124. 20PING 192. 168. 124. 20 (192. 168. 124. 20) 56(84) bytes of data. 64 bytes from 192. 168. 124. 20: icmp_seq=1 ttl=64 time=0. 531 ms64 bytes from 192. 168. 124. 20: icmp_seq=2 ttl=64 time=0. 255 ms64 bytes from 192. 168. 124. 20: icmp_seq=3 ttl=64 time=0. 688 ms64 bytes from 192. 168. 124. 20: icmp_seq=4 ttl=64 time=0. 648 ms--- 192. 168. 124. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3047msrtt min/avg/max/mdev = 0. 255/0. 530/0. 688/0. 169 msAccessing the underlay services: We can now start HTTP servers listening to the IPs attached on the VLANinterfaces: python3 -m http. server --bind 192. 168. 123. 1 9000 &python3 -m http. server --bind 192. 168. 124. 1 9000 &And finally curl this from your client (blue network): [fedora@vm-blue-1 ~]$ curl -v 192. 168. 124. 1:9000* Trying 192. 168. 124. 1:9000. . . * Connected to 192. 168. 124. 1 (192. 168. 124. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 192. 168. 124. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:05:09 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923. . . And from the client connected to the red network: [fedora@vm-red-1 ~]$ curl -v 192. 168. 123. 1:9000* Trying 192. 168. 123. 1:9000. . . * Connected to 192. 168. 123. 
1 (192. 168. 123. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 192. 168. 123. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:06:02 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923< . . . Conclusions: In this post we have seen how to use OVN-Kubernetes to create secondarynetworks connected to the physical underlay, allowing both east/westcommunication between VMs, and access to services running outside theKubernetes cluster. " + "body": "Introduction: OVN (Open Virtual Network) is a series of daemons for the Open vSwitch thattranslate virtual network configurations into OpenFlow. It provides virtualnetworking capabilities for any type of workload on a virtualized platform(virtual machines and containers) using the same API. OVN provides a higher-layer of abstraction than Open vSwitch, working withlogical routers and logical switches, rather than flows. More details can be found in the OVN architectureman page. In this post we will repeat the scenario ofits bridge CNI equivalent,using this SDN approach. This secondary network topology is akin to the onedescribed in the flatL2 topology,but allows connectivity to the physical underlay. Demo: To run this demo, we will prepare a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirtThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,and upstream latest multus-cni deployed. Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. sh --multi-network-enable ; popdThis will get you a running kind cluster, configured to use OVN-Kubernetes asthe default cluster network, configuring the multi-homing OVN-Kubernetes featuregate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v0. 59. 0). export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Single broadcast domain: In this scenario we will see how traffic from a single localnet network can beconnected to a physical network in the host using a dedicated bridge. This scenario does not use any VLAN encapsulation, thus is simpler, since thenetwork admin does not need to provision any VLANs in advance. Configuring the underlay: When you’ve started the KinD cluster with the --multi-network-enable flag anadditional OCI network was created, and attached to each of the KinD nodes. But still, further steps may be required, depending on the desired L2configuration. Let’s first create a dedicated OVS bridge, and attach the aforementionedvirtualized network to it: for node in $(kubectl -n ovn-kubernetes get pods -l app=ovs-node -o jsonpath= {. items[*]. metadata. 
name} )do kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-br ovsbr1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-port ovsbr1 eth1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl set open . external_ids:ovn-bridge-mappings=physnet:breth0,localnet-network:ovsbr1doneThe first two commands are self-evident: you create an OVS bridge, and attacha port to it; the last one is not. In it, we’re using theOVN bridge mappingAPI to configure which OVS bridge must be used for each physical network. It creates a patch port between the OVN integration bridge - br-int - and theOVS bridge you tell it to, and traffic will be forwarded to/from it with thehelp of alocalnet port. NOTE: The provided mapping must match the name within thenet-attach-def. Spec. Config JSON, otherwise, the patch ports will not becreated. You will also have to configure an IP address on the bridge for theextra-network the kind script created. For that, you first need to identify thebridge’s name. In the example below we’re providing a command for the podmanruntime: podman network inspect underlay --format '{{ . NetworkInterface }}'podman3ip addr add 10. 128. 0. 1/24 dev podman3NOTE: for docker, please use the following command: ip a | grep `docker network inspect underlay --format '{{ index . IPAM. Config 0 Gateway }}'` | awk '{print $NF}'br-0aeb0318f71fip addr add 10. 128. 0. 1/24 dev br-0aeb0318f71fLet’s also use an IP in the same subnet as the network subnet (defined in theNAD). This IP address must be excluded from the IPAM pool (also on the NAD),otherwise the OVN-Kubernetes IPAM may assign it to a workload. Defining the OVN-Kubernetes networks: Once the underlay is configured, we can now provision the attachment configuration: ---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: localnet-networkspec: config: |2 { cniVersion : 0. 3. 1 , name : localnet-network , type : ovn-k8s-cni-overlay , topology : localnet , subnets : 10. 128. 0. 0/24 , excludeSubnets : 10. 128. 0. 1/32 , netAttachDefName : default/localnet-network }It is required to list the gateway IP in the excludedSubnets attribute, thuspreventing OVN-Kubernetes from assigning that IP address to the workloads. Spin up the VMs: These two VMs can be used for the single broadcast domain scenario (no VLANs). ---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-serverspec: runStrategy: Always template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: localnet bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: localnet multus: networkName: localnet-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-clientspec: runStrategy: Always template: spec: nodeSelector: kubernetes. 
io/hostname: ovn-worker2 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: localnet bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: localnet multus: networkName: localnet-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true userData: |- #cloud-config password: fedora chpasswd: { expire: False }Test East / West communication: You can check east/west connectivity between both VMs via ICMP: $ kubectl get vmi vm-server -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent, multus-status , interfaceName : eth0 , ipAddress : 10. 128. 0. 2 , ipAddresses : [ 10. 128. 0. 2 , fe80::e83d:16ff:fe76:c1bd ], mac : ea:3d:16:76:c1:bd , name : localnet , queueCount : 1 }]$ virtctl console vm-clientSuccessfully connected to vm-client console. The escape sequence is ^][fedora@vm-client ~]$ ping 10. 128. 0. 2PING 10. 128. 0. 2 (10. 128. 0. 2) 56(84) bytes of data. 64 bytes from 10. 128. 0. 2: icmp_seq=1 ttl=64 time=0. 808 ms64 bytes from 10. 128. 0. 2: icmp_seq=2 ttl=64 time=0. 478 ms64 bytes from 10. 128. 0. 2: icmp_seq=3 ttl=64 time=0. 536 ms64 bytes from 10. 128. 0. 2: icmp_seq=4 ttl=64 time=0. 507 ms--- 10. 128. 0. 2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 0. 478/0. 582/0. 808/0. 131 msCheck underlay services: We can now start HTTP servers listening to the IPs attached onthe gateway: python3 -m http. server --bind 10. 128. 0. 1 9000And finally curl this from your client: [fedora@vm-client ~]$ curl -v 10. 128. 0. 1:9000* Trying 10. 128. 0. 1:9000. . . * Connected to 10. 128. 0. 1 (10. 128. 0. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 10. 128. 0. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:05:09 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923. . . Multiple physical networks pointing to the same OVS bridge: This example will feature 2 physical networks, each with a different VLAN,both pointing at the same OVS bridge. Configuring the underlay: Again, the first thing to do is create a dedicated OVS bridge, and attach theaforementioned virtualized network to it, while defining it as a trunk portfor two broadcast domains, with tags 10 and 20. for node in $(kubectl -n ovn-kubernetes get pods -l app=ovs-node -o jsonpath= {. items[*]. metadata. name} )do kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-br ovsbr1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-port ovsbr1 eth1 trunks=10,20 vlan_mode=trunk kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl set open . external_ids:ovn-bridge-mappings=physnet:breth0,tenantblue:ovsbr1,tenantred:ovsbr1doneWe must now configure the physical network; since the packets are leaving theOVS bridge tagged with either the 10 or 20 VLAN, we must configure the physicalnetwork where the virtualized nodes run to handle the tagged traffic. For that we will create two VLANed interfaces, each with a different subnet; wewill need to know the name of the bridge the kind script created to implementthe extra network it required. 
Those VLAN interfaces also need to be configuredwith an IP address: (for docker see previous example) podman network inspect underlay --format '{{ . NetworkInterface }}'podman3# create the VLANsip link add link podman3 name podman3. 10 type vlan id 10ip addr add 192. 168. 123. 1/24 dev podman3. 10ip link set dev podman3. 10 upip link add link podman3 name podman3. 20 type vlan id 20ip addr add 192. 168. 124. 1/24 dev podman3. 20ip link set dev podman3. 20 upNOTE: both the tenantblue and tenantred networks forward their trafficto the ovsbr1 OVS bridge. Defining the OVN-Kubernetes networks: Let us now provision the attachment configuration for the two physical networks. Notice they do not have a subnet defined, which means our workloads mustconfigure static IPs via cloud-init. ---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: tenantredspec: config: |2 { cniVersion : 0. 3. 1 , name : tenantred , type : ovn-k8s-cni-overlay , topology : localnet , vlanID : 10, netAttachDefName : default/tenantred }---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: tenantbluespec: config: |2 { cniVersion : 0. 3. 1 , name : tenantblue , type : ovn-k8s-cni-overlay , topology : localnet , vlanID : 20, netAttachDefName : default/tenantblue }NOTE: each of the tenantblue and tenantred networks tags their trafficwith a different VLAN, which must be listed on the port trunks configuration. Spin up the VMs: These two VMs can be used for the OVS bridge sharing scenario (two physicalnetworks share the same OVS bridge, each on a different VLAN). ---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-red-1spec: runStrategy: Always template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-red bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-red multus: networkName: tenantred terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 123. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-red-2spec: runStrategy: Always template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: flatl2-overlay multus: networkName: tenantred terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 123. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-blue-1spec: runStrategy: Always template: spec: nodeSelector: kubernetes. 
io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-blue bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-blue multus: networkName: tenantblue terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 124. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-blue-2spec: runStrategy: Always template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-blue bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-blue multus: networkName: tenantblue terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 124. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }Test East / West communication: You can check east/west connectivity between both red VMs via ICMP: $ kubectl get vmi vm-red-2 -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 192. 168. 123. 20 , ipAddresses : [ 192. 168. 123. 20 , fe80::e83d:16ff:fe76:c1bd ], mac : ea:3d:16:76:c1:bd , name : flatl2-overlay , queueCount : 1 }]$ virtctl console vm-red-1Successfully connected to vm-red-1 console. The escape sequence is ^][fedora@vm-red-1 ~]$ ping 192. 168. 123. 20PING 192. 168. 123. 20 (192. 168. 123. 20) 56(84) bytes of data. 64 bytes from 192. 168. 123. 20: icmp_seq=1 ttl=64 time=0. 534 ms64 bytes from 192. 168. 123. 20: icmp_seq=2 ttl=64 time=0. 246 ms64 bytes from 192. 168. 123. 20: icmp_seq=3 ttl=64 time=0. 178 ms64 bytes from 192. 168. 123. 20: icmp_seq=4 ttl=64 time=0. 236 ms--- 192. 168. 123. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3028msrtt min/avg/max/mdev = 0. 178/0. 298/0. 534/0. 138 msThe same behavior can be seen on the VMs attached to the blue network: $ kubectl get vmi vm-blue-2 -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 192. 168. 124. 20 , ipAddresses : [ 192. 168. 124. 20 , fe80::6cae:e4ff:fefc:bd02 ], mac : 6e:ae:e4:fc:bd:02 , name : physnet-blue , queueCount : 1 }]$ virtctl console vm-blue-1Successfully connected to vm-blue-1 console. The escape sequence is ^][fedora@vm-blue-1 ~]$ ping 192. 168. 124. 20PING 192. 168. 124. 20 (192. 168. 124. 20) 56(84) bytes of data. 64 bytes from 192. 168. 124. 20: icmp_seq=1 ttl=64 time=0. 531 ms64 bytes from 192. 168. 124. 20: icmp_seq=2 ttl=64 time=0. 255 ms64 bytes from 192. 168. 124. 20: icmp_seq=3 ttl=64 time=0. 688 ms64 bytes from 192. 168. 124. 20: icmp_seq=4 ttl=64 time=0. 648 ms--- 192. 168. 124. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3047msrtt min/avg/max/mdev = 0. 255/0. 530/0. 688/0. 169 msAccessing the underlay services: We can now start HTTP servers listening to the IPs attached on the VLANinterfaces: python3 -m http. server --bind 192. 
168. 123. 1 9000 &python3 -m http. server --bind 192. 168. 124. 1 9000 &And finally curl this from your client (blue network): [fedora@vm-blue-1 ~]$ curl -v 192. 168. 124. 1:9000* Trying 192. 168. 124. 1:9000. . . * Connected to 192. 168. 124. 1 (192. 168. 124. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 192. 168. 124. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:05:09 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923. . . And from the client connected to the red network: [fedora@vm-red-1 ~]$ curl -v 192. 168. 123. 1:9000* Trying 192. 168. 123. 1:9000. . . * Connected to 192. 168. 123. 1 (192. 168. 123. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 192. 168. 123. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:06:02 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923< . . . Conclusions: In this post we have seen how to use OVN-Kubernetes to create secondarynetworks connected to the physical underlay, allowing both east/westcommunication between VMs, and access to services running outside theKubernetes cluster. " }, { "id": 11, "url": "/2023/OVN-kubernetes-secondary-networks.html", "title": "Secondary networks for KubeVirt VMs using OVN-Kubernetes", "author" : "Miguel Duarte Barroso", "tags" : "Kubevirt, kubernetes, virtual machine, VM, SDN, OVN", - "body": "Introduction: OVN (Open Virtual Network) is a series of daemons for the Open vSwitch thattranslate virtual network configurations into OpenFlow. It provides virtualnetworking capabilities for any type of workload on a virtualized platform(virtual machines and containers) using the same API. OVN provides a higher-layer of abstraction than Open vSwitch, working withlogical routers and logical switches, rather than flows. More details can be found in the OVN architectureman page. In this post we will repeat the scenario ofits bridge CNI equivalent,using this SDN approach, which uses virtual networking infrastructure: thus, itis not required to provision VLANs or other physical network resources. Demo: To run this demo, you will need a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirtThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,and upstream latest multus-cni deployed. Please skip this section if yourcluster already features these components (e. g. Openshift). Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. sh --multi-network-enable ; popdThis will get you a running kind cluster, configured to use OVN-Kubernetes asthe default cluster network, configuring the multi-homing OVN-Kubernetes featuregate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v0. 59. 0). Please skip thissection if you already have a running cluster with KubeVirt installed in it. export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. 
txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Define the overlay network: Provision the following yaml to define the overlay which will configure thesecondary attachment for the KubeVirt VMs. Please refer to the OVN-Kubernetesuserdocumentationfor details into each of the knobs. cat <<EOF | kubectl apply -f -apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: l2-network namespace: defaultspec: config: |2 { cniVersion : 0. 3. 1 , name : l2-network , type : ovn-k8s-cni-overlay , topology : layer2 , netAttachDefName : default/l2-network }EOFThe above example will configure a cluster-wide overlay without a subnetdefined. This means the users will have to define static IPs for their VMs. It is also worth to point out the value of the netAttachDefName attributemust match the <namespace>/<name> of the surroundingNetworkAttachmentDefinition object. Spin up the VMs: cat <<EOF | kubectl apply -f ----apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-serverspec: running: true template: spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: flatl2-overlay multus: networkName: l2-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 192. 0. 2. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-clientspec: running: true template: spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: flatl2-overlay multus: networkName: l2-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 192. 0. 2. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }EOFProvision these two Virtual Machines, and wait for them to boot up. Test connectivity: To verify connectivity over our layer 2 overlay, we need first to ensure the IPaddress of the server VM; let’s query the VMI status for that: kubectl get vmi vm-server -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 10. 244. 2. 8 , ipAddresses : [ 10. 244. 2. 8 ], mac : 52:54:00:23:1c:c2 , name : default , queueCount : 1 }, { infoSource : domain, guest-agent , interfaceName : eth1 , ipAddress : 192. 0. 2. 20 , ipAddresses : [ 192. 0. 2. 
20 , fe80::7cab:88ff:fe5b:39f ], mac : 7e:ab:88:5b:03:9f , name : flatl2-overlay , queueCount : 1 }]You can afterwards connect to them via console and ping vm-server: Note The user and password for this VMs is fedora; check the VM template spec cloudinit userData virtctl console vm-clientip a # confirm the IP address is the one set via cloud-init[fedora@vm-client ~]$ ip a1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127. 0. 0. 1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:29:de:53 brd ff:ff:ff:ff:ff:ff altname enp1s0 inet 10. 0. 2. 2/24 brd 10. 0. 2. 255 scope global dynamic noprefixroute eth0 valid_lft 86313584sec preferred_lft 86313584sec inet6 fe80::5054:ff:fe29:de53/64 scope link valid_lft forever preferred_lft forever3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000 link/ether 36:f9:29:65:66:55 brd ff:ff:ff:ff:ff:ff altname enp2s0 inet 192. 0. 2. 10/24 brd 192. 0. 2. 255 scope global noprefixroute eth1 valid_lft forever preferred_lft forever inet6 fe80::34f9:29ff:fe65:6655/64 scope link valid_lft forever preferred_lft forever[fedora@vm-client ~]$ ping -c4 192. 0. 2. 20 # ping the vm-server static IPPING 192. 0. 2. 20 (192. 0. 2. 20) 56(84) bytes of data. 64 bytes from 192. 0. 2. 20: icmp_seq=1 ttl=64 time=1. 05 ms64 bytes from 192. 0. 2. 20: icmp_seq=2 ttl=64 time=1. 05 ms64 bytes from 192. 0. 2. 20: icmp_seq=3 ttl=64 time=0. 995 ms64 bytes from 192. 0. 2. 20: icmp_seq=4 ttl=64 time=0. 902 ms--- 192. 0. 2. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 0. 902/0. 997/1. 046/0. 058 msConclusion: In this post we have seen how to use OVN-Kubernetes to create an overlay toconnect VMs in different nodes using secondary networks, without having toconfigure any physical networking infrastructure. " + "body": "Introduction: OVN (Open Virtual Network) is a series of daemons for the Open vSwitch thattranslate virtual network configurations into OpenFlow. It provides virtualnetworking capabilities for any type of workload on a virtualized platform(virtual machines and containers) using the same API. OVN provides a higher-layer of abstraction than Open vSwitch, working withlogical routers and logical switches, rather than flows. More details can be found in the OVN architectureman page. In this post we will repeat the scenario ofits bridge CNI equivalent,using this SDN approach, which uses virtual networking infrastructure: thus, itis not required to provision VLANs or other physical network resources. Demo: To run this demo, you will need a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirtThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,and upstream latest multus-cni deployed. Please skip this section if yourcluster already features these components (e. g. Openshift). Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. 
sh --multi-network-enable ; popdThis will get you a running kind cluster, configured to use OVN-Kubernetes asthe default cluster network, configuring the multi-homing OVN-Kubernetes featuregate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v0. 59. 0). Please skip thissection if you already have a running cluster with KubeVirt installed in it. export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Define the overlay network: Provision the following yaml to define the overlay which will configure thesecondary attachment for the KubeVirt VMs. Please refer to the OVN-Kubernetesuserdocumentationfor details into each of the knobs. cat <<EOF | kubectl apply -f -apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: l2-network namespace: defaultspec: config: |2 { cniVersion : 0. 3. 1 , name : l2-network , type : ovn-k8s-cni-overlay , topology : layer2 , netAttachDefName : default/l2-network }EOFThe above example will configure a cluster-wide overlay without a subnetdefined. This means the users will have to define static IPs for their VMs. It is also worth to point out the value of the netAttachDefName attributemust match the <namespace>/<name> of the surroundingNetworkAttachmentDefinition object. Spin up the VMs: cat <<EOF | kubectl apply -f ----apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-serverspec: runStrategy: Always template: spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: flatl2-overlay multus: networkName: l2-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 192. 0. 2. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-clientspec: runStrategy: Always template: spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: flatl2-overlay multus: networkName: l2-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 192. 0. 2. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }EOFProvision these two Virtual Machines, and wait for them to boot up. 
Test connectivity: To verify connectivity over our layer 2 overlay, we need first to ensure the IPaddress of the server VM; let’s query the VMI status for that: kubectl get vmi vm-server -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 10. 244. 2. 8 , ipAddresses : [ 10. 244. 2. 8 ], mac : 52:54:00:23:1c:c2 , name : default , queueCount : 1 }, { infoSource : domain, guest-agent , interfaceName : eth1 , ipAddress : 192. 0. 2. 20 , ipAddresses : [ 192. 0. 2. 20 , fe80::7cab:88ff:fe5b:39f ], mac : 7e:ab:88:5b:03:9f , name : flatl2-overlay , queueCount : 1 }]You can afterwards connect to them via console and ping vm-server: Note The user and password for this VMs is fedora; check the VM template spec cloudinit userData virtctl console vm-clientip a # confirm the IP address is the one set via cloud-init[fedora@vm-client ~]$ ip a1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127. 0. 0. 1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:29:de:53 brd ff:ff:ff:ff:ff:ff altname enp1s0 inet 10. 0. 2. 2/24 brd 10. 0. 2. 255 scope global dynamic noprefixroute eth0 valid_lft 86313584sec preferred_lft 86313584sec inet6 fe80::5054:ff:fe29:de53/64 scope link valid_lft forever preferred_lft forever3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000 link/ether 36:f9:29:65:66:55 brd ff:ff:ff:ff:ff:ff altname enp2s0 inet 192. 0. 2. 10/24 brd 192. 0. 2. 255 scope global noprefixroute eth1 valid_lft forever preferred_lft forever inet6 fe80::34f9:29ff:fe65:6655/64 scope link valid_lft forever preferred_lft forever[fedora@vm-client ~]$ ping -c4 192. 0. 2. 20 # ping the vm-server static IPPING 192. 0. 2. 20 (192. 0. 2. 20) 56(84) bytes of data. 64 bytes from 192. 0. 2. 20: icmp_seq=1 ttl=64 time=1. 05 ms64 bytes from 192. 0. 2. 20: icmp_seq=2 ttl=64 time=1. 05 ms64 bytes from 192. 0. 2. 20: icmp_seq=3 ttl=64 time=0. 995 ms64 bytes from 192. 0. 2. 20: icmp_seq=4 ttl=64 time=0. 902 ms--- 192. 0. 2. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 0. 902/0. 997/1. 046/0. 058 msConclusion: In this post we have seen how to use OVN-Kubernetes to create an overlay toconnect VMs in different nodes using secondary networks, without having toconfigure any physical networking infrastructure. " }, { "id": 12, "url": "/2023/KubeVirt-Summit-2023.html", @@ -482,7 +482,7 @@
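Both OVN-Kubernetes walkthroughs above now declare their VirtualMachines with runStrategy: Always rather than the deprecated running boolean. A VM defined this way is still powered on and off imperatively through virtctl, which updates the run strategy on your behalf; a minimal sketch, reusing the vm-server name from the examples above:

# power the VM off and on; for the Always/Halted strategies used here,
# virtctl rewrites spec.runStrategy accordingly
virtctl stop vm-server
virtctl start vm-server

# confirm the currently configured run strategy
kubectl get vm vm-server -o jsonpath='{.spec.runStrategy}'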

    "title": "Load-balancer for virtual machines on bare metal Kubernetes clusters", "author" : "Ram Lavi", "tags" : "Kubevirt, kubernetes, virtual machine, VM, load-balancer, MetalLB", - "body": "Introduction: Over the last year, Kubevirt and MetalLB have shown to be powerful duo in order to support fault-tolerant access to an application on virtual machines through an external IP address. As a Cluster administrator using an on-prem cluster without a network load-balancer, now it’s possible to use MetalLB operator to provide load-balancer capabilities (with Services of type LoadBalancer) to virtual machines. MetalLB: MetalLB allows you to create Kubernetes services of type LoadBalancer, and provides network load-balancer implementation in on-prem clusters that don’t run on a cloud provider. MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes, Layer 2 and BGP: Layer 2 mode (ARP/NDP): This mode - which actually does not implement real load-balancing behavior - provides a failover mechanism where a single node owns the LoadBalancer service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network. In this method, the MetalLB speaker pod announces the IPs in ARP (for IPv4) and NDP (for IPv6) protocols over the host network. From a network perspective, the node owning the service appears to have multiple IP addresses assigned to a network interface. After traffic is routed to the node, the service proxy sends the traffic to the application pods. BGP mode: This mode provides real load-balancing behavior, by establishing BGP peering sessions with the network routers - which advertise the external IPs of the LoadBalancer service, distributing the load over the nodes. To read more on MetalLB concepts, implementation and limitations, please read its documentation. Demo: Virtual machine with external IP and MetalLB load-balancer: With the following recipe we will end up with a nginx server running on a virtual machine, accessible outside the cluster using MetalLB load-balancer with Layer 2 mode. Demo environment setup: We are going to use kind provider as an ephemeral Kubernetes cluster. Prerequirements: First install kind on your machine following its installation guide. To use kind, you will also need to install docker. External IPs on macOS and Windows: This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer’s external IP if the IP space is within the docker IP space. On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind’s Configuration Guide. Deploying cluster: To start a kind cluster: kind create clusterIn order to interact with the specific cluster created: kubectl cluster-info --context kind-kindInstalling components: Installing MetalLB on the cluster: There are many ways to install MetalLB. For the sake of this example, we will install MetalLB via manifests. To do this, follow this guide. 
Confirm successful installation by waiting for MetalLB pods to have a status of Running: kubectl get pods -n metallb-system --watchInstalling Kubevirt on the cluster: Following Kubevirt user guide to install released version v0. 51. 0 export RELEASE=v0. 51. 0kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Network resources configuration: Setting Address Pool to be used by the LoadBalancer: In order to complete the Layer 2 mode configuration, we need to set a range of IP addresses for the LoadBalancer to use. On Linux we can use the docker kind network (macOS and Windows users see External IPs Prerequirement), so by using this command: docker network inspect -f '' kindYou should get the subclass you can set the IP range from. The output should contain a cidr such as 172. 18. 0. 0/16. Using this result we will create the following Layer 2 address pool with 172. 18. 1. 1-172. 18. 1. 16 range: cat <<EOF | kubectl apply -f -apiVersion: v1kind: ConfigMapmetadata: namespace: metallb-system name: configdata: config: | address-pools: - name: addresspool-sample1 protocol: layer2 addresses: - 172. 18. 1. 1-172. 18. 1. 16EOFNetwork utilization: Spin up a Virtual Machine running Nginx: Now it’s time to start-up a virtual machine running nginx using the following yaml. The virtual machine has a metallb-service=nginx we created to use when creating the service. cat <<EOF | kubectl apply -f -apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: name: fedora-nginx namespace: default labels: metallb-service: nginxspec: running: true template: metadata: labels: metallb-service: nginx spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 0 volumes: - containerDisk: image: kubevirt/fedora-cloud-container-disk-demo name: containerdisk - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } packages: - nginx runcmd: - [ systemctl , enable , --now , nginx ] name: cloudinitdiskEOFExpose the virtual machine with a typed LoadBalancer service: When creating the LoadBalancer typed service, we need to remember annotating the address-pool we want to use addresspool-sample1 and also add the selector metallb-service: nginx: cat <<EOF | kubectl apply -f -kind: ServiceapiVersion: v1metadata: name: metallb-nginx-svc namespace: default annotations: metallb. universe. tf/address-pool: addresspool-sample1spec: externalTrafficPolicy: Local ipFamilies: - IPv4 ports: - name: tcp-5678 protocol: TCP port: 5678 targetPort: 80 type: LoadBalancer selector: metallb-service: nginxEOFNotice that the service got assigned with an external IP from the range assigned by the address pool: kubectl get service -n default metallb-nginx-svcExample output: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEmetallb-nginx-svc LoadBalancer 10. 96. 254. 136 172. 18. 1. 1 5678:32438/TCP 53sAccess the virtual machine from outside the cluster: Finally, we can check that the nginx server is accessible from outside the cluster: curl -s -o /dev/null 172. 18. 1. 
1:5678 && echo URL exists Example output: URL existsNote that it may take a short while for the URL to work after setting the service. Doing this on your own cluster: Moving outside the demo example, one who would like use MetalLB on their real life cluster, should also take other considerations in mind: User privileges: you should have cluster-admin privileges on the cluster - in order to install MetalLB. IP Ranges for MetalLB: getting IP Address pools allocation for MetalLB depends on your cluster environment: If you’re running a bare-metal cluster in a shared host environment, you need to first reserve this IP Address pool from your hosting provider. Alternatively, if you’re running on a private cluster, you can use one of the private IP Address spaces (a. k. a RFC1918 addresses). Such addresses are free, and work fine as long as you’re only providing cluster services to your LAN. Conclusion: In this blog post we used MetalLB to expose a service using an external IP assigned to a virtual machine. This illustrates how virtual machine traffic can be load-balanced via a service. " + "body": "Introduction: Over the last year, Kubevirt and MetalLB have shown to be powerful duo in order to support fault-tolerant access to an application on virtual machines through an external IP address. As a Cluster administrator using an on-prem cluster without a network load-balancer, now it’s possible to use MetalLB operator to provide load-balancer capabilities (with Services of type LoadBalancer) to virtual machines. MetalLB: MetalLB allows you to create Kubernetes services of type LoadBalancer, and provides network load-balancer implementation in on-prem clusters that don’t run on a cloud provider. MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes, Layer 2 and BGP: Layer 2 mode (ARP/NDP): This mode - which actually does not implement real load-balancing behavior - provides a failover mechanism where a single node owns the LoadBalancer service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network. In this method, the MetalLB speaker pod announces the IPs in ARP (for IPv4) and NDP (for IPv6) protocols over the host network. From a network perspective, the node owning the service appears to have multiple IP addresses assigned to a network interface. After traffic is routed to the node, the service proxy sends the traffic to the application pods. BGP mode: This mode provides real load-balancing behavior, by establishing BGP peering sessions with the network routers - which advertise the external IPs of the LoadBalancer service, distributing the load over the nodes. To read more on MetalLB concepts, implementation and limitations, please read its documentation. Demo: Virtual machine with external IP and MetalLB load-balancer: With the following recipe we will end up with a nginx server running on a virtual machine, accessible outside the cluster using MetalLB load-balancer with Layer 2 mode. Demo environment setup: We are going to use kind provider as an ephemeral Kubernetes cluster. Prerequirements: First install kind on your machine following its installation guide. To use kind, you will also need to install docker. 
External IPs on macOS and Windows: This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer’s external IP if the IP space is within the docker IP space. On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind’s Configuration Guide. Deploying cluster: To start a kind cluster: kind create clusterIn order to interact with the specific cluster created: kubectl cluster-info --context kind-kindInstalling components: Installing MetalLB on the cluster: There are many ways to install MetalLB. For the sake of this example, we will install MetalLB via manifests. To do this, follow this guide. Confirm successful installation by waiting for MetalLB pods to have a status of Running: kubectl get pods -n metallb-system --watchInstalling Kubevirt on the cluster: Following Kubevirt user guide to install released version v0. 51. 0 export RELEASE=v0. 51. 0kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Network resources configuration: Setting Address Pool to be used by the LoadBalancer: In order to complete the Layer 2 mode configuration, we need to set a range of IP addresses for the LoadBalancer to use. On Linux we can use the docker kind network (macOS and Windows users see External IPs Prerequirement), so by using this command: docker network inspect -f '' kindYou should get the subclass you can set the IP range from. The output should contain a cidr such as 172. 18. 0. 0/16. Using this result we will create the following Layer 2 address pool with 172. 18. 1. 1-172. 18. 1. 16 range: cat <<EOF | kubectl apply -f -apiVersion: v1kind: ConfigMapmetadata: namespace: metallb-system name: configdata: config: | address-pools: - name: addresspool-sample1 protocol: layer2 addresses: - 172. 18. 1. 1-172. 18. 1. 16EOFNetwork utilization: Spin up a Virtual Machine running Nginx: Now it’s time to start-up a virtual machine running nginx using the following yaml. The virtual machine has a metallb-service=nginx we created to use when creating the service. cat <<EOF | kubectl apply -f -apiVersion: kubevirt. 
io/v1kind: VirtualMachinemetadata: name: fedora-nginx namespace: default labels: metallb-service: nginxspec: runStrategy: Always template: metadata: labels: metallb-service: nginx spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 0 volumes: - containerDisk: image: kubevirt/fedora-cloud-container-disk-demo name: containerdisk - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } packages: - nginx runcmd: - [ systemctl , enable , --now , nginx ] name: cloudinitdiskEOFExpose the virtual machine with a typed LoadBalancer service: When creating the LoadBalancer typed service, we need to remember annotating the address-pool we want to use addresspool-sample1 and also add the selector metallb-service: nginx: cat <<EOF | kubectl apply -f -kind: ServiceapiVersion: v1metadata: name: metallb-nginx-svc namespace: default annotations: metallb. universe. tf/address-pool: addresspool-sample1spec: externalTrafficPolicy: Local ipFamilies: - IPv4 ports: - name: tcp-5678 protocol: TCP port: 5678 targetPort: 80 type: LoadBalancer selector: metallb-service: nginxEOFNotice that the service got assigned with an external IP from the range assigned by the address pool: kubectl get service -n default metallb-nginx-svcExample output: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEmetallb-nginx-svc LoadBalancer 10. 96. 254. 136 172. 18. 1. 1 5678:32438/TCP 53sAccess the virtual machine from outside the cluster: Finally, we can check that the nginx server is accessible from outside the cluster: curl -s -o /dev/null 172. 18. 1. 1:5678 && echo URL exists Example output: URL existsNote that it may take a short while for the URL to work after setting the service. Doing this on your own cluster: Moving outside the demo example, one who would like use MetalLB on their real life cluster, should also take other considerations in mind: User privileges: you should have cluster-admin privileges on the cluster - in order to install MetalLB. IP Ranges for MetalLB: getting IP Address pools allocation for MetalLB depends on your cluster environment: If you’re running a bare-metal cluster in a shared host environment, you need to first reserve this IP Address pool from your hosting provider. Alternatively, if you’re running on a private cluster, you can use one of the private IP Address spaces (a. k. a RFC1918 addresses). Such addresses are free, and work fine as long as you’re only providing cluster services to your LAN. Conclusion: In this blog post we used MetalLB to expose a service using an external IP assigned to a virtual machine. This illustrates how virtual machine traffic can be load-balanced via a service. " }, { "id": 25, "url": "/2022/changelog-v0.51.0.html", @@ -531,7 +531,7 @@
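The MetalLB address pool above uses the ConfigMap-based configuration that matched the release available when the post was written; MetalLB 0.13 and later configure pools through CRDs instead. A minimal sketch of an equivalent pool on a CRD-based installation, keeping the addresspool-sample1 name and address range from the post (the L2Advertisement name is illustrative):

cat <<EOF | kubectl apply -f -
# CRD replacement for the layer-2 address pool defined via ConfigMap above
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: addresspool-sample1
  namespace: metallb-system
spec:
  addresses:
  - 172.18.1.1-172.18.1.16
---
# advertise the pool in layer 2 mode
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-sample1
  namespace: metallb-system
spec:
  ipAddressPools:
  - addresspool-sample1
EOF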

    "title": "Running real-time workloads with improved performance", "author" : "Jordi Gil", "tags" : "kubevirt, kubernetes, virtual machine, VM, real-time, NUMA, CPUManager", - "body": "Motivation: It has been possible in KubeVirt for some time already to run a VM running with a RT kernel, however the performance of such workloads never achieved parity against running on top of a bare metal host virtualized. With the availability of NUMA and CPUManager as features in KubeVirt, we were close to a point where we had almost all the ingredients to deliver the recommended tunings in libvirt for achieving the low CPU latency needed for such workloads. We were missing two important settings: The ability to configure the VCPUs to run with real-time scheduling policy. Lock the VMs huge pages in RAM to prevent swapping. Setting up the Environment: To achieve the lowest latency possible in a given environment, first it needs to be configured to allow its resources to be consumed efficiently. The Cluster: The target node has to be configured to reserve memory for hugepages and the kernel to allow threads to run with real-time scheduling policy. The memory can be reserved as a kernel boot parameter or by changing the kernel’s page count at runtime. The kernel’s runtime scheduling limit can be adjusted either by installing a real-time kernel in the node (the recommended option), or changing the kernel’s setting kernel. sched_rt_runtime_us to equal -1, to allow for unlimited runtime of real-time scheduled threads. This kernel setting defines the time period to be devoted to running real-time threads. KubeVirt will detect if the node has been configured with unlimited runtime and will label the node with kubevirt. io/realtime to highlight the capacity of running real-time workloads. Later on we’ll come back to this label when we talk about how the workload is scheduled. It is also recommended tuning the node’s BIOS settings for optimal real-time performance is also recommended to achieve even lower CPU latencies. Consult with your hardware provider to obtain the information on how to best tune your equipment. KubeVirt: The VM will require to be granted fully dedicated CPUs and be able to use huge pages. These requirements can be achieved in KubeVirt by enabling the feature gates of CPUManager and NUMA in the KubeVirt CR. There is no dedicated feature gate to enable the new real-time optimizations. The Manifest: With the cluster configured to provide the dedicated resources for the workload, it’s time to review an example of a VM manifest using the optimizations for low CPU latency. The first focus is to reduce the VM’s I/O by limiting it’s devices to only serial console: spec. domain. devices. autoattachSerialConsole: truespec. domain. devices. autoattachMemBalloon: falsespec. domain. devices. autoattachGraphicsDevice: falseThe pod needs to have a guaranteed QoS for its memory and CPU resources, to make sure that the CPU manager will dedicate the requested CPUs to the pod. spec. domain. resources. request. cpu: 2spec. domain. resources. request. memory: 1Gispec. domain. resources. limits. cpu: 2spec. domain. resources. limits. memory: 1GiStill on the CPU front, we add the settings to instruct the KVM to give a clear visibility of the host’s features to the guest, request the CPU manager in the node to isolate the assigned CPUs and to make sure that the emulator and IO threads in the VM run in their own dedicated VCPU rather than sharing the computational time with the workload. spec. domain. cpu. 
model: host-passthroughspec. domain. cpu. dedicateCpuPlacement: truespec. domain. cpu. isolateEmulatorThread: truespec. domain. cpu. ioThreadsPolicy: autoWe also request the huge pages size and guaranteed NUMA topology that will pin the CPU and memory resources to a single NUMA node in the host. The Kubernetes scheduler will perform due diligence to schedule the pod in a node with enough free huge pages of the given size. spec. domain. cpu. numa. guestMappingPassthrough: {}spec. domain. memory. hugepages. pageSize: 1GiLastly, we define the new real-time settings to instruct KubeVirt to apply the real-time scheduling policy for the pinned VCPUs and lock the process memory to avoid from being swapped by the host. In this example, we’ll configure the workload to only apply the real-time scheduling policy to VCPU 0. spec. domain. cpu. realtime. mask: 0Alternatively, if no mask value is specified, all requested CPUs will be configured for real-time scheduling. spec. domain. cpu. realtime: {}The following yaml is a complete manifest including all the settings we just reviewed. ---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: fedora-realtime name: fedora-realtime namespace: pocspec: running: true template: metadata: labels: kubevirt. io/vm: fedora-realtime spec: domain: devices: autoattachSerialConsole: true autoattachMemBalloon: false autoattachGraphicsDevice: false disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi cpu: 2 limits: memory: 1Gi cpu: 2 cpu: model: host-passthrough dedicatedCpuPlacement: true isolateEmulatorThread: true ioThreadsPolicy: auto features: - name: tsc-deadline policy: require numa: guestMappingPassthrough: {} realtime: mask: 0 memory: hugepages: pageSize: 1Gi terminationGracePeriodSeconds: 0 volumes: - containerDisk: image: quay. io/kubevirt/fedora-realtime-container-disk:20211008_5a22acb18 name: containerdisk - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - tuned-adm profile realtime name: cloudinitdiskThe Deployment: Because the manifest has enabled the real-time setting, when deployed KubeVirt applies the node label selector so that the Kubernetes scheduler will place the deployment in a node that is able to run threads with real-time scheduling policy (node label kubevirt. io/realtime). But there’s more, because the manifest also specifies the pod’s resource need of dedicated CPUs, KubeVirt will also add the node selector of cpumanager=true to guarantee that the pod is able to use the assigned CPUs alone. And finally, the scheduler also takes care of guaranteeing that the target node has sufficient free huge pages of the specified size (1Gi in our example) to satisfy the memory requested. With all these validations checked, the pod is successfully scheduled. Key Takeaways: Being able to run real-time workloads in KubeVirt with lower CPU latency opens new possibilities and expands the use cases where KubeVirt can assist in migrating legacy VMs into the cloud. Real-time workloads are extremely sensitive to the amount of layers between the bare metal and its runtime: the more layers in between, the higher the latency will be. The changes introduced in KubeVirt help reduce such waste and provide lower CPU latencies as the hardware is more efficiently tuned. 
" + "body": "Motivation: It has been possible in KubeVirt for some time already to run a VM running with a RT kernel, however the performance of such workloads never achieved parity against running on top of a bare metal host virtualized. With the availability of NUMA and CPUManager as features in KubeVirt, we were close to a point where we had almost all the ingredients to deliver the recommended tunings in libvirt for achieving the low CPU latency needed for such workloads. We were missing two important settings: The ability to configure the VCPUs to run with real-time scheduling policy. Lock the VMs huge pages in RAM to prevent swapping. Setting up the Environment: To achieve the lowest latency possible in a given environment, first it needs to be configured to allow its resources to be consumed efficiently. The Cluster: The target node has to be configured to reserve memory for hugepages and the kernel to allow threads to run with real-time scheduling policy. The memory can be reserved as a kernel boot parameter or by changing the kernel’s page count at runtime. The kernel’s runtime scheduling limit can be adjusted either by installing a real-time kernel in the node (the recommended option), or changing the kernel’s setting kernel. sched_rt_runtime_us to equal -1, to allow for unlimited runtime of real-time scheduled threads. This kernel setting defines the time period to be devoted to running real-time threads. KubeVirt will detect if the node has been configured with unlimited runtime and will label the node with kubevirt. io/realtime to highlight the capacity of running real-time workloads. Later on we’ll come back to this label when we talk about how the workload is scheduled. It is also recommended tuning the node’s BIOS settings for optimal real-time performance is also recommended to achieve even lower CPU latencies. Consult with your hardware provider to obtain the information on how to best tune your equipment. KubeVirt: The VM will require to be granted fully dedicated CPUs and be able to use huge pages. These requirements can be achieved in KubeVirt by enabling the feature gates of CPUManager and NUMA in the KubeVirt CR. There is no dedicated feature gate to enable the new real-time optimizations. The Manifest: With the cluster configured to provide the dedicated resources for the workload, it’s time to review an example of a VM manifest using the optimizations for low CPU latency. The first focus is to reduce the VM’s I/O by limiting it’s devices to only serial console: spec. domain. devices. autoattachSerialConsole: truespec. domain. devices. autoattachMemBalloon: falsespec. domain. devices. autoattachGraphicsDevice: falseThe pod needs to have a guaranteed QoS for its memory and CPU resources, to make sure that the CPU manager will dedicate the requested CPUs to the pod. spec. domain. resources. request. cpu: 2spec. domain. resources. request. memory: 1Gispec. domain. resources. limits. cpu: 2spec. domain. resources. limits. memory: 1GiStill on the CPU front, we add the settings to instruct the KVM to give a clear visibility of the host’s features to the guest, request the CPU manager in the node to isolate the assigned CPUs and to make sure that the emulator and IO threads in the VM run in their own dedicated VCPU rather than sharing the computational time with the workload. spec. domain. cpu. model: host-passthroughspec. domain. cpu. dedicateCpuPlacement: truespec. domain. cpu. isolateEmulatorThread: truespec. domain. cpu. 
ioThreadsPolicy: autoWe also request the huge pages size and guaranteed NUMA topology that will pin the CPU and memory resources to a single NUMA node in the host. The Kubernetes scheduler will perform due diligence to schedule the pod in a node with enough free huge pages of the given size. spec. domain. cpu. numa. guestMappingPassthrough: {}spec. domain. memory. hugepages. pageSize: 1GiLastly, we define the new real-time settings to instruct KubeVirt to apply the real-time scheduling policy for the pinned VCPUs and lock the process memory to avoid from being swapped by the host. In this example, we’ll configure the workload to only apply the real-time scheduling policy to VCPU 0. spec. domain. cpu. realtime. mask: 0Alternatively, if no mask value is specified, all requested CPUs will be configured for real-time scheduling. spec. domain. cpu. realtime: {}The following yaml is a complete manifest including all the settings we just reviewed. ---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: fedora-realtime name: fedora-realtime namespace: pocspec: runStrategy: Always template: metadata: labels: kubevirt. io/vm: fedora-realtime spec: domain: devices: autoattachSerialConsole: true autoattachMemBalloon: false autoattachGraphicsDevice: false disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi cpu: 2 limits: memory: 1Gi cpu: 2 cpu: model: host-passthrough dedicatedCpuPlacement: true isolateEmulatorThread: true ioThreadsPolicy: auto features: - name: tsc-deadline policy: require numa: guestMappingPassthrough: {} realtime: mask: 0 memory: hugepages: pageSize: 1Gi terminationGracePeriodSeconds: 0 volumes: - containerDisk: image: quay. io/kubevirt/fedora-realtime-container-disk:20211008_5a22acb18 name: containerdisk - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - tuned-adm profile realtime name: cloudinitdiskThe Deployment: Because the manifest has enabled the real-time setting, when deployed KubeVirt applies the node label selector so that the Kubernetes scheduler will place the deployment in a node that is able to run threads with real-time scheduling policy (node label kubevirt. io/realtime). But there’s more, because the manifest also specifies the pod’s resource need of dedicated CPUs, KubeVirt will also add the node selector of cpumanager=true to guarantee that the pod is able to use the assigned CPUs alone. And finally, the scheduler also takes care of guaranteeing that the target node has sufficient free huge pages of the specified size (1Gi in our example) to satisfy the memory requested. With all these validations checked, the pod is successfully scheduled. Key Takeaways: Being able to run real-time workloads in KubeVirt with lower CPU latency opens new possibilities and expands the use cases where KubeVirt can assist in migrating legacy VMs into the cloud. Real-time workloads are extremely sensitive to the amount of layers between the bare metal and its runtime: the more layers in between, the higher the latency will be. The changes introduced in KubeVirt help reduce such waste and provide lower CPU latencies as the hardware is more efficiently tuned. " }, { "id": 32, "url": "/2021/changelog-v0.46.0.html", @@ -601,14 +601,14 @@

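One piece of the real-time setup described above that is easy to overlook is enabling the CPUManager and NUMA feature gates on the KubeVirt CR. A minimal sketch of that change, mirroring the patch style used for the vGPU example later in this document (the file name kubevirt-realtime-patch.yaml is just an example), might look like this:
# Note: a JSON merge patch replaces the whole featureGates list, so include any gates you already rely on
cat > kubevirt-realtime-patch.yaml << EOF
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - CPUManager
        - NUMA
EOF
kubectl patch kubevirt kubevirt -n kubevirt --patch "$(cat kubevirt-realtime-patch.yaml)" --type=merge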
    "title": "Using Intel vGPUs with Kubevirt", "author" : "Mark DeNeve", "tags" : "kubevirt, vGPU, Windows, GPU, Intel, minikube, Fedora", - "body": " Introduction Prerequisites Fedora Workstation Prep Preparing the Intel vGPU driver Install Kubernetes with minikube Install kubevirt Validate vGPU detection Install Containerize Data Importer Install Windows Accessing the Windows VM Using the GPUIntroduction: Graphical User Interfaces (GUIs) have come along way over the past few years and most modern desktop environments expect some form of GPU acceleration in order to give you a seamless user experience. If you have tried running things like Windows 10 within Kubevirt you may have noticed that the desktop experience felt a little slow. This is due to Windows 10 reliance on GPU acceleration. In addition many applications are also now taking advantage of GPU acceleration and it can even be used in web based applications such as “FishGL”: Without GPU hardware acceleration the user experience of a Virtual machine can be greatly impacted. Starting with 5th generation Intel Core processors that have embedded Intel graphics processing units it is possible to share the graphics processor between multiple virtual machines. In Linux, this sharing of a GPU is typically enabled through the use of mediated GPU devices, also known as vGPUs. Kubevirt has supported the use of GPUs including GPU passthrough and vGPU since v0. 22. 0 back in 2019. This support was centered around one specific vendor, and only worked with expensive enterprise class cards and required additional licensing. Starting with Kubevirt 0. 40 support for detecting and allocating the Intel based vGPUs has been added to Kubevirt. Support for the creation of these virtualized Intel GPUs is available in the Linux Kernel since the 4. 19 release. What does this meaning for you? You no longer need additional drivers or licenses to test out GPU accelerated virtual machines. The total number of Intel vGPUs you can create is dependent on your specific hardware as well as support for changing the Graphics aperture size and shared graphics memory within your BIOS. For more details on this see Create vGPU (KVMGT only) in the Intel GVTg wiki. Minimally configured devices can typically make at least two vGPU devices. You can reproduce this work on any Kubernetes cluster running kubevirt v0. 40. 0 or later, but the steps you need to take to load the kernel modules and enable the virtual devices will vary based on the underlying OS your Kubernetes cluster is running on. In order to demonstrate how you can enable this feature, we will use an all-in-one Kubernetes cluster built using Fedora 32 and minikube. Note This blog post is a more advanced topic and assumes some Linux and Kubernetes understanding. Prerequisites: Before we begin you will need a few things to make use of the Intel GPU: A workstation or server with a 5th Generation or higher Intel Core Processor, or E3_v4 or higher Xeon Processor and enough memory to virtualize one or more VMs A preinstalled Fedora 32 Workstation with at least 50Gb of free space in the “/” filesystem The following software: minikube - See minikube start virtctl - See kubevirt releases kubectl - See Install and Set Up kubectl on Linux A Windows 10 Install ISO Image - See Download Windows 10 Disk ImageFedora Workstation Prep: In order to use minikube on Fedora 32 we will be installing multiple applications that will be used throughout this demo. 
In addition we will be configuring the workstation to use cgroups v1 and we will be updating the firewall to allow proper communication to our Kubernetes cluster as well as any hosted applications. Finally we will be disabling SELinux per the minikube bare-metal install instructions: Note This post assumes that we are starting with a fresh install of Fedora 32. If you are using an existing configured Fedora 32 Workstation, you may have some software conflicts. sudo dnf update -ysudo dnf install -y pciutils podman podman-docker conntrack tigervnc rdesktopsudo grubby --update-kernel=ALL --args= systemd. unified_cgroup_hierarchy=0 # Setup firewall rules to allow inbound and outbound connections from your minikube clustersudo firewall-cmd --add-port=30000-65535/tcp --permanentsudo firewall-cmd --add-port=30000-65535/udp --permanentsudo firewall-cmd --add-port=10250-10252/tcp --permanentsudo firewall-cmd --add-port=10248/tcp --permanentsudo firewall-cmd --add-port=2379-2380/tcp --permanentsudo firewall-cmd --add-port=6443/tcp --permanentsudo firewall-cmd --add-port=8443/tcp --permanentsudo firewall-cmd --add-port=9153/tcp --permanentsudo firewall-cmd --add-service=dns --permanentsudo firewall-cmd --add-interface=cni-podman0 --permanentsudo firewall-cmd --add-masquerade --permanentsudo vi /etc/selinux/config# change the SELINUX=enforcing to SELINUX=permissive sudo setenforce 0sudo systemctl enable sshd --nowWe will now install the CRIO runtime: sudo dnf module enable -y cri-o:1. 18sudo dnf install -y cri-o cri-toolssudo systemctl enable --now crioPreparing the Intel vGPU driver: In order to make use of the Intel vGPU driver, we need to make a few changes to our all-in-one host. The commands below assume you are using a Fedora based host. If you are using a different base OS, be sure to update your commands for that specific distribution. The following commands will do the following: load the kvmgt module to enable support within kvm enable gvt in the i915 module update the Linux kernel to enable Intel IOMMUsudo sh -c echo kvmgt > /etc/modules-load. d/gpu-kvmgt. conf sudo grubby --update-kernel=ALL --args= intel_iommu=on i915. enable_gvt=1 sudo shutdown -r nowAfter the reboot check to ensure that the proper kernel modules have been loaded: $ sudo lsmod | grep kvmgtkvmgt 32768 0mdev 20480 2 kvmgt,vfio_mdevvfio 32768 3 kvmgt,vfio_mdev,vfio_iommu_type1kvm 798720 2 kvmgt,kvm_inteli915 2494464 4 kvmgtdrm 557056 4 drm_kms_helper,kvmgt,i915We will now create our vGPU devices. These virtual devices are created by echoing a GUID into a sys device created by the Intel driver. This needs to be done every time the system boots. The easiest way to do this is using a systemd service that runs on every boot. Before we create this systemd service, we need to validate the PCI ID of your Intel Graphics card. To do this we will use the lspci command $ sudo lspci00:00. 0 Host bridge: Intel Corporation Device 9b53 (rev 03)00:02. 0 VGA compatible controller: Intel Corporation Device 9bc8 (rev 03)00:08. 0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture ModelTake note that in the above output the Intel GPU is on “00:02. 0”. Now create the /etc/systemd/system/gvtg-enable. service but be sure to update the PCI ID as appropriate for your machine: cat > ~/gvtg-enable. service << EOF[Unit]Description=Create Intel GVT-g vGPU[Service]Type=oneshotExecStart=/bin/sh -c echo '56a4c4e2-c81f-4cba-82bf-af46c30ea32d' > /sys/devices/pci0000:00/0000:00:02. 
0/mdev_supported_types/i915-GVTg_V5_8/create ExecStart=/bin/sh -c echo '973069b7-2025-406b-b3c9-301016af3150' > /sys/devices/pci0000:00/0000:00:02. 0/mdev_supported_types/i915-GVTg_V5_8/create ExecStop=/bin/sh -c echo '1' > /sys/devices/pci0000:00/0000:00:02. 0/56a4c4e2-c81f-4cba-82bf-af46c30ea32d/remove ExecStop=/bin/sh -c echo '1' > /sys/devices/pci0000:00/0000:00:02. 0/973069b7-2025-406b-b3c9-301016af3150/remove RemainAfterExit=yes[Install]WantedBy=multi-user. targetEOFsudo mv ~/gvtg-enable. service /etc/systemd/system/gvtg-enable. servicesudo systemctl enable gvtg-enable --nowNote The above systemd service will create two vGPU devices, you can repeat the commands with additional unique GUIDs up to a maximum of 8 vGPU if your particular hardware supports it. We can validate that the vGPU devices were created by looking in the /sys/devices/pci0000:00/0000:00:02. 0/ directory. $ ls -lsa /sys/devices/pci0000:00/0000:00:02. 0/56a4c4e2-c81f-4cba-82bf-af46c30ea32dtotal 0lrwxrwxrwx. 1 root root 0 Apr 20 13:56 driver -> . . /. . /. . /. . /bus/mdev/drivers/vfio_mdevdrwxr-xr-x. 2 root root 0 Apr 20 14:41 intel_vgpulrwxrwxrwx. 1 root root 0 Apr 20 14:41 iommu_group -> . . /. . /. . /. . /kernel/iommu_groups/8lrwxrwxrwx. 1 root root 0 Apr 20 14:41 mdev_type -> . . /mdev_supported_types/i915-GVTg_V5_8drwxr-xr-x. 2 root root 0 Apr 20 14:41 power--w-------. 1 root root 4096 Apr 20 14:41 removelrwxrwxrwx. 1 root root 0 Apr 20 13:56 subsystem -> . . /. . /. . /. . /bus/mdev-rw-r--r--. 1 root root 4096 Apr 20 13:56 ueventNote that “mdev_type” points to “i915-GVTg_V5_8”, this will come into play later when we configure kubevirt to detect the vGPU. Install Kubernetes with minikube: We will now install Kubernetes onto our Fedora Workstation. Minikube will help quickly set up our Kubernetes cluster environment. We will start by getting the latest release of minikube and kubectl. curl -LO https://storage. googleapis. com/minikube/releases/latest/minikube-linux-amd64sudo install minikube-linux-amd64 /usr/local/bin/minikubeVERSION=$(minikube kubectl version | head -1 | awk -F', ' {'print $3'} | awk -F':' {'print $2'} | sed s/\ //g)sudo install ${HOME}/. minikube/cache/linux/${VERSION}/kubectl /usr/local/binWe will be using the minikube driver “none” which will install Kubernetes directly onto this machine. This will allow you to maintain a copy of the virtual machines that you build through a reboot. Later in this post we will create persistent volumes for virtual machine storage in “/data”. As previously noted, ensure that you have at least 50Gb of free space in “/data” to complete this setup. The minikube install will take a few minutes to complete. $ sudo mkdir -p /data/winhd1-pv$ sudo minikube start --driver=none --container-runtime=crio😄 minikube v1. 19. 0 on Fedora 32✨ Using the none driver based on user configuration👍 Starting control plane node minikube in cluster minikube🤹 Running on localhost (CPUs=12, Memory=31703MB, Disk=71645MB) . . . ℹ️ OS release is Fedora 32 (Workstation Edition)🐳 Preparing Kubernetes v1. 20. 2 on Docker 20. 10. 6 . . . ▪ Generating certificates and keys . . . ▪ Booting up control plane . . . ▪ Configuring RBAC rules . . . 🤹 Configuring local host environment . . . 🔎 Verifying Kubernetes components. . . ▪ Using image gcr. io/k8s-minikube/storage-provisioner:v5🌟 Enabled addons: storage-provisioner, default-storageclass🏄 Done! 
kubectl is now configured to use minikube cluster and default namespace by defaultIn order to make our interaction with Kubernetes a little easier, we will need to copy some files and update our . kube/config mkdir -p ~/. minikube/profiles/minikubesudo cp -r /root/. kube /home/$USERsudo cp /root/. minikube/ca. crt /home/$USER/. minikube/ca. crtsudo cp /root/. minikube/profiles/minikube/client. crt /home/$USER/. minikube/profiles/minikubesudo cp /root/. minikube/profiles/minikube/client. key /home/$USER/. minikube/profiles/minikubesudo chown -R $USER:$USER /home/$USER/. kubesudo chown -R $USER:$USER /home/$USER/. minikubesed -i s/root/home\/$USER/ ~/. kube/configOnce the minikube install is complete, validate that everything is working properly. $ kubectl get nodesNAME STATUS ROLES AGE VERSIONkubevirt Ready control-plane,master 4m5s v1. 20. 2As long as you don’t get any errors, your base Kubernetes cluster is ready to go. Install kubevirt: Our all-in-one Kubernetes cluster is now ready for installing Installing Kubevirt. Using the minikube addons manager, we will install kubevirt into our cluster: sudo minikube addons enable kubevirtkubectl -n kubevirt wait kubevirt kubevirt --for condition=Available --timeout=300sAt this point, we need to update our instance of kubevirt in the cluster. We need to configure kubevirt to detect the Intel vGPU by giving it an mdevNameSelector to look for, and a resourceName to assign to it. The mdevNameSelector comes from the “mdev_type” that we identified earlier when we created the two virtual GPUs. When the kubevirt device manager finds instances of this mdev type, it will record this information and tag the node with the identified resourceName. We will use this resourceName later when we start up our virtual machine. cat > kubevirt-patch. yaml << EOFspec: configuration: developerConfiguration: featureGates: - GPU permittedHostDevices: mediatedDevices: - mdevNameSelector: i915-GVTg_V5_8 resourceName: intel. com/U630 EOFkubectl patch kubevirt kubevirt -n kubevirt --patch $(cat kubevirt-patch. yaml) --type=mergeWe now need to wait for kubevirt to reload its configuration. Validate vGPU detection: Now that kubevirt is installed and running, lets ensure that the vGPU was identified correctly. Describe the minikube node, using the command kubectl describe node and look for the “Capacity” section. If kubevirt properly detected the vGPU you will see an entry for “intel. com/U630” with a capacity value of greater than 0. $ kubectl describe nodeName: kubevirtRoles: control-plane,masterLabels: beta. kubernetes. io/arch=amd64 beta. kubernetes. io/os=linux. . . Capacity: cpu: 12 devices. kubevirt. io/kvm: 110 devices. kubevirt. io/tun: 110 devices. kubevirt. io/vhost-net: 110 ephemeral-storage: 71645Mi hugepages-1Gi: 0 hugepages-2Mi: 0 intel. com/U630: 2 memory: 11822640Ki pods: 110There it is, intel. com/U630 - two of them are available. Now all we need is a virtual machine to consume them. Install Containerize Data Importer: In order to install Windows 10, we are going to need to upload a Windows 10 install ISO to the cluster. This can be facilitated through the use of the Containerized Data Importer. The following steps are taken from the Experiment with the Containerized Data Importer (CDI) web page: export VERSION=$(curl -s https://github. com/kubevirt/containerized-data-importer/releases/latest | grep -o v[0-9]\. [0-9]*\. [0-9]* )kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator. 
yamlkubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr. yamlkubectl -n cdi wait cdi cdi --for condition=Available --timeout=300sNow that our CDI is available, we will expose it for consumption using a nodePort. This will allow us to connect to the cdi-proxy in the next steps. cat > cdi-nodeport. yaml << EOFapiVersion: v1kind: Servicemetadata: name: cdi-proxy-nodeport namespace: cdispec: type: NodePort selector: cdi. kubevirt. io: cdi-uploadproxy ports: - port: 8443 nodePort: 30443EOFkubectl create -f cdi-nodeport. yamlOne final step, lets get the latest release of virtctl which we will be using as we install Windows. VERSION=$(kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. observedKubeVirtVersion} )curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64sudo install virtctl /usr/local/binInstall Windows: At this point we can now install a Windows VM in order to test this feature. The steps below are based on KubeVirt: installing Microsoft Windows from an ISO however we will be using Windows 10 instead of Windows Server 2012. The commands below assume that you have a Windows 10 ISO file called win10-virtio. iso. If you need a Windows 10 CD, please see Download Windows 10 Disk Image and come back here after you have obtained your install CD. $ virtctl image-upload \ --image-path=win10-virtio. iso \ --pvc-name=iso-win10 \ --access-mode=ReadWriteOnce \ --pvc-size=6G \ --uploadproxy-url=https://127. 0. 0. 1:30443 \ --insecure \ --wait-secs=240We need a place to store our Windows 10 virtual disk, use the following to create a 40Gb space to store our file. In order to do this within minikube we will manually create a PersistentVolume (PV) as well as a PersistentVolumeClaim (PVC). These steps assume that you have 45+ GiB of free space in “/”. We will create a “/data” directory as well as a subdirectory for storing our PV. If you do not have at least 45 GiB of free space in “/”, you will need to free up space, or mount storage on “/data” to continue. cat > win10-pvc. yaml << EOF---apiVersion: v1kind: PersistentVolumemetadata: name: pvwinhd1spec: accessModes: - ReadWriteOnce capacity: storage: 43Gi claimRef: namespace: default name: winhd1 hostPath: path: /data/winhd1-pv---apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhd1spec: accessModes: - ReadWriteOnce resources: requests: storage: 40GiEOFkubectl create -f win10-pvc. yamlWe can now create our Windows 10 virtual machine. Use the following to create a virtual machine definition file that includes a vGPU: cat > win10vm1. yaml << EOFapiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win10vm1spec: running: false template: metadata: creationTimestamp: null labels: kubevirt. io/domain: win10vm1 spec: domain: clock: timer: hpet: present: false hyperv: {} pit: tickPolicy: delay rtc: tickPolicy: catchup utc: {} cpu: cores: 1 sockets: 2 threads: 1 devices: gpus: - deviceName: intel. com/U630 name: gpu1 disks: - cdrom: bus: sata name: windows-guest-tools - bootOrder: 1 cdrom: bus: sata name: cdrom - bootOrder: 2 disk: bus: sata name: disk-1 inputs: - bus: usb name: tablet type: tablet interfaces: - masquerade: {} model: e1000e name: nic-0 features: acpi: {} apic: {} hyperv: relaxed: {} spinlocks: spinlocks: 8191 vapic: {} machine: type: pc-q35-rhel8. 2. 
0 resources: requests: memory: 8Gi hostname: win10vm1 networks: - name: nic-0 pod: {} terminationGracePeriodSeconds: 3600 volumes: - name: cdrom persistentVolumeClaim: claimName: iso-win10 - name: disk-1 persistentVolumeClaim: claimName: winhd1 - containerDisk: image: quay. io/kubevirt/virtio-container-disk name: windows-guest-toolsEOFkubectl create -f win10vm1. yamlNOTE This VM is not optimized to use virtio devices to simplify the OS install. By using SATA devices as well as an emulated e1000 network card, we do not need to worry about loading additional drivers. The key piece of information that we have added to this virtual machine definition is this snippet of yaml: devices: gpus: - deviceName: intel. com/U630 name: gpu1Here we are identifying the gpu device that we want to attach to this VM. The deviceName relates back to the name that we gave to kubevirt to identify the Intel GPU resources. It also is the same identifier that shows up in the “Capacity” section of a node when you run kubectl describe node. We can now start the virtual machine: virtctl start win10vm1kubectl get vmi --watchWhen the output of shows that the vm is in a “Running” phase you can “CTRL+C” to end the watch command. Accessing the Windows VM: Since we are running this VM on this local machine, we can now take advantage of the virtctl command to connect to the VNC console of the virtual machine. virtctl vnc win10vm1A new VNC Viewer window will open and you should now see the Windows 10 install screen. Follow standard Windows 10 install steps at this point. Once the install is complete you have a Windows 10 VM running with a GPU available. You can test that GPU acceleration is available by opening the Windows 10 task manager, selecting Advanced and then select the “Performance” tab. Note that the first time you start up, Windows is still detecting and installing the appropriate drivers. It may take a minute or two for the GPU information to show up in the Performance tab. Try testing out the GPU acceleration. Open a web browser in your VM and navigate to “https://webglsamples. org/fishtank/fishtank. html” HOWEVER don’t be surprised by the poor performance. The default kubevirt console does not take advantage of the GPU. For that we need to take one final step to use the Windows Remote Desktop Protocol (RDP) which can use the GPU. Using the GPU: In order to take advantage of the virtual GPU we have added, we will need to connect to the virtual machine over Remote Desktop Protocol (RDP). Follow these steps to enable RDP: In the Windows 10 search bar, type “Remote Desktop Settings” and then open the result. Select “Enable Remote Desktop” and confirm the change. Select “Advanced settings” and un-check “Require computers to use Network level Authentication”, and confirm this change. Finally reboot the Windows 10 Virtual machine. Now, run the following commands in order to expose the RDP server to outside your Kubernetes cluster: $ virtctl expose vm win10vm1 --port=3389 --type=NodePort --name=win10vm1-rdp$ kubectl get svcNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEkubernetes ClusterIP 10. 96. 0. 1 <none> 443/TCP 18hwin10vm1-rdp NodePort 10. 105. 159. 184 <none> 3389:30627/TCP 39sNote the port that was assigned to this service we will use it in the next step. In the above output the port is 30627. We can now use the rdesktop tool to connect to our VM and get the full advantages of the vGPU. From a command line run rdesktop localhost:<port> being sure to update the port based on the output from above. 
When prompted by rdesktop accept the certificate. Log into your Windows 10 client. You can now test out the vGPU. Let’s try FishGL again. Open a browser and go to https://webglsamples. org/fishtank/fishtank. html. You should notice a large improvement in the applications performance. You can also open the Task Manager and look at the performance tab to see the GPU under load. Note that since you are running your Fedora 32 workstation on this same GPU you are already sharing the graphics workload between your primary desktop, and the virtualized Windows Desktop also running on this machine. Congratulations! You now have a VM running in Kubernetes using an Intel vGPU. If your test machine has enough resources you can repeat the steps and create multiple virtual machines all sharing the one Intel GPU. " + "body": " Introduction Prerequisites Fedora Workstation Prep Preparing the Intel vGPU driver Install Kubernetes with minikube Install kubevirt Validate vGPU detection Install Containerize Data Importer Install Windows Accessing the Windows VM Using the GPUIntroduction: Graphical User Interfaces (GUIs) have come along way over the past few years and most modern desktop environments expect some form of GPU acceleration in order to give you a seamless user experience. If you have tried running things like Windows 10 within Kubevirt you may have noticed that the desktop experience felt a little slow. This is due to Windows 10 reliance on GPU acceleration. In addition many applications are also now taking advantage of GPU acceleration and it can even be used in web based applications such as “FishGL”: Without GPU hardware acceleration the user experience of a Virtual machine can be greatly impacted. Starting with 5th generation Intel Core processors that have embedded Intel graphics processing units it is possible to share the graphics processor between multiple virtual machines. In Linux, this sharing of a GPU is typically enabled through the use of mediated GPU devices, also known as vGPUs. Kubevirt has supported the use of GPUs including GPU passthrough and vGPU since v0. 22. 0 back in 2019. This support was centered around one specific vendor, and only worked with expensive enterprise class cards and required additional licensing. Starting with Kubevirt 0. 40 support for detecting and allocating the Intel based vGPUs has been added to Kubevirt. Support for the creation of these virtualized Intel GPUs is available in the Linux Kernel since the 4. 19 release. What does this meaning for you? You no longer need additional drivers or licenses to test out GPU accelerated virtual machines. The total number of Intel vGPUs you can create is dependent on your specific hardware as well as support for changing the Graphics aperture size and shared graphics memory within your BIOS. For more details on this see Create vGPU (KVMGT only) in the Intel GVTg wiki. Minimally configured devices can typically make at least two vGPU devices. You can reproduce this work on any Kubernetes cluster running kubevirt v0. 40. 0 or later, but the steps you need to take to load the kernel modules and enable the virtual devices will vary based on the underlying OS your Kubernetes cluster is running on. In order to demonstrate how you can enable this feature, we will use an all-in-one Kubernetes cluster built using Fedora 32 and minikube. Note This blog post is a more advanced topic and assumes some Linux and Kubernetes understanding. 
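Once the i915/kvmgt changes described in the next sections are in place, you can check up front how many vGPU instances your particular GPU can expose. The PCI address 0000:00:02.0 and the type name i915-GVTg_V5_8 below are simply the values used later in this post; adjust them for your hardware:
# list the mediated device (vGPU) types the Intel driver exposes for the integrated GPU
ls /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/
# show how many instances of a given type can still be created on this host
cat /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/available_instances
The available_instances value decreases by one for each vGPU you create, which is a handy sanity check when deciding how many virtual machines can share the GPU.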
Prerequisites: Before we begin you will need a few things to make use of the Intel GPU: A workstation or server with a 5th Generation or higher Intel Core Processor, or E3_v4 or higher Xeon Processor and enough memory to virtualize one or more VMs A preinstalled Fedora 32 Workstation with at least 50Gb of free space in the “/” filesystem The following software: minikube - See minikube start virtctl - See kubevirt releases kubectl - See Install and Set Up kubectl on Linux A Windows 10 Install ISO Image - See Download Windows 10 Disk ImageFedora Workstation Prep: In order to use minikube on Fedora 32 we will be installing multiple applications that will be used throughout this demo. In addition we will be configuring the workstation to use cgroups v1 and we will be updating the firewall to allow proper communication to our Kubernetes cluster as well as any hosted applications. Finally we will be disabling SELinux per the minikube bare-metal install instructions: Note This post assumes that we are starting with a fresh install of Fedora 32. If you are using an existing configured Fedora 32 Workstation, you may have some software conflicts. sudo dnf update -ysudo dnf install -y pciutils podman podman-docker conntrack tigervnc rdesktopsudo grubby --update-kernel=ALL --args= systemd. unified_cgroup_hierarchy=0 # Setup firewall rules to allow inbound and outbound connections from your minikube clustersudo firewall-cmd --add-port=30000-65535/tcp --permanentsudo firewall-cmd --add-port=30000-65535/udp --permanentsudo firewall-cmd --add-port=10250-10252/tcp --permanentsudo firewall-cmd --add-port=10248/tcp --permanentsudo firewall-cmd --add-port=2379-2380/tcp --permanentsudo firewall-cmd --add-port=6443/tcp --permanentsudo firewall-cmd --add-port=8443/tcp --permanentsudo firewall-cmd --add-port=9153/tcp --permanentsudo firewall-cmd --add-service=dns --permanentsudo firewall-cmd --add-interface=cni-podman0 --permanentsudo firewall-cmd --add-masquerade --permanentsudo vi /etc/selinux/config# change the SELINUX=enforcing to SELINUX=permissive sudo setenforce 0sudo systemctl enable sshd --nowWe will now install the CRIO runtime: sudo dnf module enable -y cri-o:1. 18sudo dnf install -y cri-o cri-toolssudo systemctl enable --now crioPreparing the Intel vGPU driver: In order to make use of the Intel vGPU driver, we need to make a few changes to our all-in-one host. The commands below assume you are using a Fedora based host. If you are using a different base OS, be sure to update your commands for that specific distribution. The following commands will do the following: load the kvmgt module to enable support within kvm enable gvt in the i915 module update the Linux kernel to enable Intel IOMMUsudo sh -c echo kvmgt > /etc/modules-load. d/gpu-kvmgt. conf sudo grubby --update-kernel=ALL --args= intel_iommu=on i915. enable_gvt=1 sudo shutdown -r nowAfter the reboot check to ensure that the proper kernel modules have been loaded: $ sudo lsmod | grep kvmgtkvmgt 32768 0mdev 20480 2 kvmgt,vfio_mdevvfio 32768 3 kvmgt,vfio_mdev,vfio_iommu_type1kvm 798720 2 kvmgt,kvm_inteli915 2494464 4 kvmgtdrm 557056 4 drm_kms_helper,kvmgt,i915We will now create our vGPU devices. These virtual devices are created by echoing a GUID into a sys device created by the Intel driver. This needs to be done every time the system boots. The easiest way to do this is using a systemd service that runs on every boot. Before we create this systemd service, we need to validate the PCI ID of your Intel Graphics card. 
To do this we will use the lspci command $ sudo lspci00:00. 0 Host bridge: Intel Corporation Device 9b53 (rev 03)00:02. 0 VGA compatible controller: Intel Corporation Device 9bc8 (rev 03)00:08. 0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture ModelTake note that in the above output the Intel GPU is on “00:02. 0”. Now create the /etc/systemd/system/gvtg-enable. service but be sure to update the PCI ID as appropriate for your machine: cat > ~/gvtg-enable. service << EOF[Unit]Description=Create Intel GVT-g vGPU[Service]Type=oneshotExecStart=/bin/sh -c echo '56a4c4e2-c81f-4cba-82bf-af46c30ea32d' > /sys/devices/pci0000:00/0000:00:02. 0/mdev_supported_types/i915-GVTg_V5_8/create ExecStart=/bin/sh -c echo '973069b7-2025-406b-b3c9-301016af3150' > /sys/devices/pci0000:00/0000:00:02. 0/mdev_supported_types/i915-GVTg_V5_8/create ExecStop=/bin/sh -c echo '1' > /sys/devices/pci0000:00/0000:00:02. 0/56a4c4e2-c81f-4cba-82bf-af46c30ea32d/remove ExecStop=/bin/sh -c echo '1' > /sys/devices/pci0000:00/0000:00:02. 0/973069b7-2025-406b-b3c9-301016af3150/remove RemainAfterExit=yes[Install]WantedBy=multi-user. targetEOFsudo mv ~/gvtg-enable. service /etc/systemd/system/gvtg-enable. servicesudo systemctl enable gvtg-enable --nowNote The above systemd service will create two vGPU devices, you can repeat the commands with additional unique GUIDs up to a maximum of 8 vGPU if your particular hardware supports it. We can validate that the vGPU devices were created by looking in the /sys/devices/pci0000:00/0000:00:02. 0/ directory. $ ls -lsa /sys/devices/pci0000:00/0000:00:02. 0/56a4c4e2-c81f-4cba-82bf-af46c30ea32dtotal 0lrwxrwxrwx. 1 root root 0 Apr 20 13:56 driver -> . . /. . /. . /. . /bus/mdev/drivers/vfio_mdevdrwxr-xr-x. 2 root root 0 Apr 20 14:41 intel_vgpulrwxrwxrwx. 1 root root 0 Apr 20 14:41 iommu_group -> . . /. . /. . /. . /kernel/iommu_groups/8lrwxrwxrwx. 1 root root 0 Apr 20 14:41 mdev_type -> . . /mdev_supported_types/i915-GVTg_V5_8drwxr-xr-x. 2 root root 0 Apr 20 14:41 power--w-------. 1 root root 4096 Apr 20 14:41 removelrwxrwxrwx. 1 root root 0 Apr 20 13:56 subsystem -> . . /. . /. . /. . /bus/mdev-rw-r--r--. 1 root root 4096 Apr 20 13:56 ueventNote that “mdev_type” points to “i915-GVTg_V5_8”, this will come into play later when we configure kubevirt to detect the vGPU. Install Kubernetes with minikube: We will now install Kubernetes onto our Fedora Workstation. Minikube will help quickly set up our Kubernetes cluster environment. We will start by getting the latest release of minikube and kubectl. curl -LO https://storage. googleapis. com/minikube/releases/latest/minikube-linux-amd64sudo install minikube-linux-amd64 /usr/local/bin/minikubeVERSION=$(minikube kubectl version | head -1 | awk -F', ' {'print $3'} | awk -F':' {'print $2'} | sed s/\ //g)sudo install ${HOME}/. minikube/cache/linux/${VERSION}/kubectl /usr/local/binWe will be using the minikube driver “none” which will install Kubernetes directly onto this machine. This will allow you to maintain a copy of the virtual machines that you build through a reboot. Later in this post we will create persistent volumes for virtual machine storage in “/data”. As previously noted, ensure that you have at least 50Gb of free space in “/data” to complete this setup. The minikube install will take a few minutes to complete. $ sudo mkdir -p /data/winhd1-pv$ sudo minikube start --driver=none --container-runtime=crio😄 minikube v1. 19. 
0 on Fedora 32✨ Using the none driver based on user configuration👍 Starting control plane node minikube in cluster minikube🤹 Running on localhost (CPUs=12, Memory=31703MB, Disk=71645MB) . . . ℹ️ OS release is Fedora 32 (Workstation Edition)🐳 Preparing Kubernetes v1. 20. 2 on Docker 20. 10. 6 . . . ▪ Generating certificates and keys . . . ▪ Booting up control plane . . . ▪ Configuring RBAC rules . . . 🤹 Configuring local host environment . . . 🔎 Verifying Kubernetes components. . . ▪ Using image gcr. io/k8s-minikube/storage-provisioner:v5🌟 Enabled addons: storage-provisioner, default-storageclass🏄 Done! kubectl is now configured to use minikube cluster and default namespace by defaultIn order to make our interaction with Kubernetes a little easier, we will need to copy some files and update our . kube/config mkdir -p ~/. minikube/profiles/minikubesudo cp -r /root/. kube /home/$USERsudo cp /root/. minikube/ca. crt /home/$USER/. minikube/ca. crtsudo cp /root/. minikube/profiles/minikube/client. crt /home/$USER/. minikube/profiles/minikubesudo cp /root/. minikube/profiles/minikube/client. key /home/$USER/. minikube/profiles/minikubesudo chown -R $USER:$USER /home/$USER/. kubesudo chown -R $USER:$USER /home/$USER/. minikubesed -i s/root/home\/$USER/ ~/. kube/configOnce the minikube install is complete, validate that everything is working properly. $ kubectl get nodesNAME STATUS ROLES AGE VERSIONkubevirt Ready control-plane,master 4m5s v1. 20. 2As long as you don’t get any errors, your base Kubernetes cluster is ready to go. Install kubevirt: Our all-in-one Kubernetes cluster is now ready for installing Installing Kubevirt. Using the minikube addons manager, we will install kubevirt into our cluster: sudo minikube addons enable kubevirtkubectl -n kubevirt wait kubevirt kubevirt --for condition=Available --timeout=300sAt this point, we need to update our instance of kubevirt in the cluster. We need to configure kubevirt to detect the Intel vGPU by giving it an mdevNameSelector to look for, and a resourceName to assign to it. The mdevNameSelector comes from the “mdev_type” that we identified earlier when we created the two virtual GPUs. When the kubevirt device manager finds instances of this mdev type, it will record this information and tag the node with the identified resourceName. We will use this resourceName later when we start up our virtual machine. cat > kubevirt-patch. yaml << EOFspec: configuration: developerConfiguration: featureGates: - GPU permittedHostDevices: mediatedDevices: - mdevNameSelector: i915-GVTg_V5_8 resourceName: intel. com/U630 EOFkubectl patch kubevirt kubevirt -n kubevirt --patch $(cat kubevirt-patch. yaml) --type=mergeWe now need to wait for kubevirt to reload its configuration. Validate vGPU detection: Now that kubevirt is installed and running, lets ensure that the vGPU was identified correctly. Describe the minikube node, using the command kubectl describe node and look for the “Capacity” section. If kubevirt properly detected the vGPU you will see an entry for “intel. com/U630” with a capacity value of greater than 0. $ kubectl describe nodeName: kubevirtRoles: control-plane,masterLabels: beta. kubernetes. io/arch=amd64 beta. kubernetes. io/os=linux. . . Capacity: cpu: 12 devices. kubevirt. io/kvm: 110 devices. kubevirt. io/tun: 110 devices. kubevirt. io/vhost-net: 110 ephemeral-storage: 71645Mi hugepages-1Gi: 0 hugepages-2Mi: 0 intel. com/U630: 2 memory: 11822640Ki pods: 110There it is, intel. com/U630 - two of them are available. 
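If you prefer a one-line check over scanning the full describe output, the same value can be read with JSONPath; the dots in the resource name need to be escaped, and the node name kubevirt matches the earlier kubectl get nodes output:
# print how many intel.com/U630 devices the node advertises (should print 2 on the host configured above)
kubectl get node kubevirt -o jsonpath='{.status.capacity.intel\.com/U630}'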
Now all we need is a virtual machine to consume them. Install Containerize Data Importer: In order to install Windows 10, we are going to need to upload a Windows 10 install ISO to the cluster. This can be facilitated through the use of the Containerized Data Importer. The following steps are taken from the Experiment with the Containerized Data Importer (CDI) web page: export VERSION=$(curl -s https://github. com/kubevirt/containerized-data-importer/releases/latest | grep -o v[0-9]\. [0-9]*\. [0-9]* )kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator. yamlkubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr. yamlkubectl -n cdi wait cdi cdi --for condition=Available --timeout=300sNow that our CDI is available, we will expose it for consumption using a nodePort. This will allow us to connect to the cdi-proxy in the next steps. cat > cdi-nodeport. yaml << EOFapiVersion: v1kind: Servicemetadata: name: cdi-proxy-nodeport namespace: cdispec: type: NodePort selector: cdi. kubevirt. io: cdi-uploadproxy ports: - port: 8443 nodePort: 30443EOFkubectl create -f cdi-nodeport. yamlOne final step, lets get the latest release of virtctl which we will be using as we install Windows. VERSION=$(kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. observedKubeVirtVersion} )curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64sudo install virtctl /usr/local/binInstall Windows: At this point we can now install a Windows VM in order to test this feature. The steps below are based on KubeVirt: installing Microsoft Windows from an ISO however we will be using Windows 10 instead of Windows Server 2012. The commands below assume that you have a Windows 10 ISO file called win10-virtio. iso. If you need a Windows 10 CD, please see Download Windows 10 Disk Image and come back here after you have obtained your install CD. $ virtctl image-upload \ --image-path=win10-virtio. iso \ --pvc-name=iso-win10 \ --access-mode=ReadWriteOnce \ --pvc-size=6G \ --uploadproxy-url=https://127. 0. 0. 1:30443 \ --insecure \ --wait-secs=240We need a place to store our Windows 10 virtual disk, use the following to create a 40Gb space to store our file. In order to do this within minikube we will manually create a PersistentVolume (PV) as well as a PersistentVolumeClaim (PVC). These steps assume that you have 45+ GiB of free space in “/”. We will create a “/data” directory as well as a subdirectory for storing our PV. If you do not have at least 45 GiB of free space in “/”, you will need to free up space, or mount storage on “/data” to continue. cat > win10-pvc. yaml << EOF---apiVersion: v1kind: PersistentVolumemetadata: name: pvwinhd1spec: accessModes: - ReadWriteOnce capacity: storage: 43Gi claimRef: namespace: default name: winhd1 hostPath: path: /data/winhd1-pv---apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhd1spec: accessModes: - ReadWriteOnce resources: requests: storage: 40GiEOFkubectl create -f win10-pvc. yamlWe can now create our Windows 10 virtual machine. Use the following to create a virtual machine definition file that includes a vGPU: cat > win10vm1. yaml << EOFapiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win10vm1spec: runStrategy: Halted template: metadata: creationTimestamp: null labels: kubevirt. 
io/domain: win10vm1 spec: domain: clock: timer: hpet: present: false hyperv: {} pit: tickPolicy: delay rtc: tickPolicy: catchup utc: {} cpu: cores: 1 sockets: 2 threads: 1 devices: gpus: - deviceName: intel. com/U630 name: gpu1 disks: - cdrom: bus: sata name: windows-guest-tools - bootOrder: 1 cdrom: bus: sata name: cdrom - bootOrder: 2 disk: bus: sata name: disk-1 inputs: - bus: usb name: tablet type: tablet interfaces: - masquerade: {} model: e1000e name: nic-0 features: acpi: {} apic: {} hyperv: relaxed: {} spinlocks: spinlocks: 8191 vapic: {} machine: type: pc-q35-rhel8. 2. 0 resources: requests: memory: 8Gi hostname: win10vm1 networks: - name: nic-0 pod: {} terminationGracePeriodSeconds: 3600 volumes: - name: cdrom persistentVolumeClaim: claimName: iso-win10 - name: disk-1 persistentVolumeClaim: claimName: winhd1 - containerDisk: image: quay. io/kubevirt/virtio-container-disk name: windows-guest-toolsEOFkubectl create -f win10vm1. yamlNOTE This VM is not optimized to use virtio devices to simplify the OS install. By using SATA devices as well as an emulated e1000 network card, we do not need to worry about loading additional drivers. The key piece of information that we have added to this virtual machine definition is this snippet of yaml: devices: gpus: - deviceName: intel. com/U630 name: gpu1Here we are identifying the gpu device that we want to attach to this VM. The deviceName relates back to the name that we gave to kubevirt to identify the Intel GPU resources. It also is the same identifier that shows up in the “Capacity” section of a node when you run kubectl describe node. We can now start the virtual machine: virtctl start win10vm1kubectl get vmi --watchWhen the output of shows that the vm is in a “Running” phase you can “CTRL+C” to end the watch command. Accessing the Windows VM: Since we are running this VM on this local machine, we can now take advantage of the virtctl command to connect to the VNC console of the virtual machine. virtctl vnc win10vm1A new VNC Viewer window will open and you should now see the Windows 10 install screen. Follow standard Windows 10 install steps at this point. Once the install is complete you have a Windows 10 VM running with a GPU available. You can test that GPU acceleration is available by opening the Windows 10 task manager, selecting Advanced and then select the “Performance” tab. Note that the first time you start up, Windows is still detecting and installing the appropriate drivers. It may take a minute or two for the GPU information to show up in the Performance tab. Try testing out the GPU acceleration. Open a web browser in your VM and navigate to “https://webglsamples. org/fishtank/fishtank. html” HOWEVER don’t be surprised by the poor performance. The default kubevirt console does not take advantage of the GPU. For that we need to take one final step to use the Windows Remote Desktop Protocol (RDP) which can use the GPU. Using the GPU: In order to take advantage of the virtual GPU we have added, we will need to connect to the virtual machine over Remote Desktop Protocol (RDP). Follow these steps to enable RDP: In the Windows 10 search bar, type “Remote Desktop Settings” and then open the result. Select “Enable Remote Desktop” and confirm the change. Select “Advanced settings” and un-check “Require computers to use Network level Authentication”, and confirm this change. Finally reboot the Windows 10 Virtual machine. 
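If you would rather trigger that reboot from outside the guest, virtctl can restart the VM for you (the VM name matches the one created earlier); either way works, as long as the new RDP settings take effect:
# restart the VM so the RDP configuration is picked up
virtctl restart win10vm1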
Now, run the following commands in order to expose the RDP server to outside your Kubernetes cluster: $ virtctl expose vm win10vm1 --port=3389 --type=NodePort --name=win10vm1-rdp$ kubectl get svcNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEkubernetes ClusterIP 10. 96. 0. 1 <none> 443/TCP 18hwin10vm1-rdp NodePort 10. 105. 159. 184 <none> 3389:30627/TCP 39sNote the port that was assigned to this service we will use it in the next step. In the above output the port is 30627. We can now use the rdesktop tool to connect to our VM and get the full advantages of the vGPU. From a command line run rdesktop localhost:<port> being sure to update the port based on the output from above. When prompted by rdesktop accept the certificate. Log into your Windows 10 client. You can now test out the vGPU. Let’s try FishGL again. Open a browser and go to https://webglsamples. org/fishtank/fishtank. html. You should notice a large improvement in the applications performance. You can also open the Task Manager and look at the performance tab to see the GPU under load. Note that since you are running your Fedora 32 workstation on this same GPU you are already sharing the graphics workload between your primary desktop, and the virtualized Windows Desktop also running on this machine. Congratulations! You now have a VM running in Kubernetes using an Intel vGPU. If your test machine has enough resources you can repeat the steps and create multiple virtual machines all sharing the one Intel GPU. " }, { "id": 42, "url": "/2021/Automated-Windows-Installation-With-Tekton-Pipelines.html", "title": "Automated Windows Installation With Tekton Pipelines", "author" : "Filip Křepinský", "tags" : "kubevirt, Kubernetes, virtual machine, VM, Tekton Pipelines, KubeVirt Tekton Tasks, Windows", - "body": "Introduction: This blog shows how we can easily automate a process of installing Windows VMs on KubeVirt with Tekton Pipelines. Tekton Pipelines can be used to create a single Pipeline that encapsulates the installation process which can be run and replicated with PipelineRuns. The pipeline will be built with KubeVirt Tekton Tasks, which includes all the necessary tasks for this example. Pipeline Description: The pipeline will prepare an empty Persistent Volume Claim (PVC) and download a Windows source ISO into another PVC. Both of them will be initialized with Containerized Data Importer (CDI). It will then spin up an installation VM and use Windows Answer Files to automatically install the VM. Then the pipeline will wait for the installation to complete and will delete the installation VM while keeping the artifact PVC with the installed operating system. You can later use the artifact PVC as a base image and copy it for new VMs. Prerequisites: KubeVirt v0. 39. 0 Tekton Pipelines v0. 19. 0 KubeVirt Tekton Tasks v0. 3. 0Running Windows Installer Pipeline: Obtaining a URL of Windows Source ISO: First we have to obtain a Download URL of Windows Source ISO. Go to https://www. microsoft. com/en-us/software-download/windows10ISO. You can also obtain a server edition for evaluation at https://www. microsoft. com/en-us/evalcenter/evaluate-windows-server-2019. Fill in the edition and English language (other languages need to be updated in windows-10-autounattend ConfigMap below) and go to the download page. Right-click on the 64-bit download button and copy the download link. The link should be valid for 24 hours. We will need this URL a bit later when running the pipeline. Preparing autounattend. 
xml ConfigMap: Now we have to prepare our autounattend. xml Answer File with the installation instructions. We will store it in a ConfigMap, but optionally it can be stored in a Secret as well. The configuration file can be generated with Windows SIMor it can be specified manually according to Answer File Referenceand Answer File Components Reference. The following config map includes the required drivers and guest disk configuration. It also specifies how the installation should proceed and what users should be created. In our case it is an Administrator user with changepassword password. You can also change the Answer File according to your needs by consulting the already mentioned documentation. apiVersion: v1kind: ConfigMapmetadata: name: windows-10-autounattenddata: Autounattend. xml: | <?xml version= 1. 0 encoding= utf-8 ?> <unattend xmlns= urn:schemas-microsoft-com:unattend > <settings pass= windowsPE > <component name= Microsoft-Windows-PnpCustomizationsWinPE publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS processorArchitecture= amd64 xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State > <DriverPaths> <PathAndCredentials wcm:action= add wcm:keyValue= 1 > <Path>E:\viostor\w10\amd64</Path> </PathAndCredentials> <PathAndCredentials wcm:action= add wcm:keyValue= 2 > <Path>E:\NetKVM\w10\amd64</Path> </PathAndCredentials> <PathAndCredentials wcm:action= add wcm:keyValue= 3 > <Path>E:\viorng\w10\amd64</Path> </PathAndCredentials> </DriverPaths> </component> <component name= Microsoft-Windows-International-Core-WinPE processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SetupUILanguage> <UILanguage>en-US</UILanguage> </SetupUILanguage> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component name= Microsoft-Windows-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. 
org/2001/XMLSchema-instance > <DiskConfiguration> <Disk wcm:action= add > <CreatePartitions> <CreatePartition wcm:action= add > <Order>1</Order> <Type>Primary</Type> <Size>100</Size> </CreatePartition> <CreatePartition wcm:action= add > <Extend>true</Extend> <Order>2</Order> <Type>Primary</Type> </CreatePartition> </CreatePartitions> <ModifyPartitions> <ModifyPartition wcm:action= add > <Active>true</Active> <Format>NTFS</Format> <Label>System Reserved</Label> <Order>1</Order> <PartitionID>1</PartitionID> <TypeID>0x27</TypeID> </ModifyPartition> <ModifyPartition wcm:action= add > <Active>true</Active> <Format>NTFS</Format> <Label>OS</Label> <Letter>C</Letter> <Order>2</Order> <PartitionID>2</PartitionID> </ModifyPartition> </ModifyPartitions> <DiskID>0</DiskID> <WillWipeDisk>true</WillWipeDisk> </Disk> </DiskConfiguration> <ImageInstall> <OSImage> <InstallTo> <DiskID>0</DiskID> <PartitionID>2</PartitionID> </InstallTo> <InstallToAvailablePartition>false</InstallToAvailablePartition> </OSImage> </ImageInstall> <UserData> <AcceptEula>true</AcceptEula> <FullName>Administrator</FullName> <Organization></Organization> <ProductKey> <Key>W269N-WFGWX-YVC9B-4J6C9-T83GX</Key> </ProductKey> </UserData> </component> </settings> <settings pass= offlineServicing > <component name= Microsoft-Windows-LUA-Settings processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <EnableLUA>false</EnableLUA> </component> </settings> <settings pass= generalize > <component name= Microsoft-Windows-Security-SPP processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SkipRearm>1</SkipRearm> </component> </settings> <settings pass= specialize > <component name= Microsoft-Windows-International-Core processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component name= Microsoft-Windows-Security-SPP-UX processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SkipAutoActivation>true</SkipAutoActivation> </component> <component name= Microsoft-Windows-SQMApi processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <CEIPEnabled>0</CEIPEnabled> </component> <component name= Microsoft-Windows-Shell-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. 
org/2001/XMLSchema-instance > <ComputerName>WindowsVM</ComputerName> <ProductKey>W269N-WFGWX-YVC9B-4J6C9-T83GX</ProductKey> </component> </settings> <settings pass= oobeSystem > <component name= Microsoft-Windows-Shell-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <AutoLogon> <Password> <Value>changepassword</Value> <PlainText>true</PlainText> </Password> <Enabled>true</Enabled> <Username>Administrator</Username> </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Home</NetworkLocation> <SkipUserOOBE>true</SkipUserOOBE> <SkipMachineOOBE>true</SkipMachineOOBE> <ProtectYourPC>3</ProtectYourPC> </OOBE> <UserAccounts> <LocalAccounts> <LocalAccount wcm:action= add > <Password> <Value>changepassword</Value> <PlainText>true</PlainText> </Password> <Description></Description> <DisplayName>Administrator</DisplayName> <Group>Administrators</Group> <Name>Administrator</Name> </LocalAccount> </LocalAccounts> </UserAccounts> <RegisteredOrganization></RegisteredOrganization> <RegisteredOwner>Administrator</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <FirstLogonCommands> <SynchronousCommand wcm:action= add > <Description>Control Panel View</Description> <Order>1</Order> <CommandLine>reg add HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel /v StartupPage /t REG_DWORD /d 1 /f</CommandLine> <RequiresUserInput>true</RequiresUserInput> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>2</Order> <Description>Control Panel Icon Size</Description> <RequiresUserInput>false</RequiresUserInput> <CommandLine>reg add HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel /v AllItemsIconView /t REG_DWORD /d 0 /f</CommandLine> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>3</Order> <RequiresUserInput>false</RequiresUserInput> <CommandLine>cmd /C wmic useraccount where name= Administrator set PasswordExpires=false</CommandLine> <Description>Password Never Expires</Description> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>4</Order> <Description>Remove AutoAdminLogon</Description> <RequiresUserInput>false</RequiresUserInput> <CommandLine>reg add HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon /v AutoAdminLogon /t REG_SZ /d 0 /f</CommandLine> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>5</Order> <RequiresUserInput>false</RequiresUserInput> <CommandLine>cmd /c shutdown /s /f /t 10</CommandLine> <Description>Shuts down the system</Description> </SynchronousCommand> </FirstLogonCommands> <TimeZone>Alaskan Standard Time</TimeZone> </component> </settings> </unattend>---Creating the Pipeline: Let’s create a pipeline which consists of the following tasks. create-source-dv --- create-vm-from-manifest --- wait-for-vmi-status --- cleanup-vm | create-base-dv -- create-source-dv task downloads a Windows source ISO into a PVC called windows-10-source-*. create-base-dv task creates an empty PVC for new windows installation called windows-10-base-*. 
create-vm-from-manifest task creates a VM called windows-installer-*from the empty PVC and with the windows-10-source-* PVC attached as a CD-ROM. wait-for-vmi-status task waits until the VM shuts down. cleanup-vm deletes the installer VM and ISO PVC. The output artifact will be the windows-10-base-* PVC with the Windows installation. apiVersion: tekton. dev/v1beta1kind: Pipelinemetadata: name: windows-installerspec: params: - name: winImageDownloadURL type: string - name: autounattendConfigMapName default: windows-10-autounattend type: string tasks: - name: create-source-dv params: - name: manifest value: | apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: generateName: windows-10-source- spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 7Gi volumeMode: Filesystem source: http: url: $(params. winImageDownloadURL) - name: waitForSuccess value: 'true' timeout: '2h' taskRef: kind: ClusterTask name: create-datavolume-from-manifest - name: create-base-dv params: - name: manifest value: | apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: generateName: windows-10-base- spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi volumeMode: Filesystem source: blank: {} - name: waitForSuccess value: 'true' taskRef: kind: ClusterTask name: create-datavolume-from-manifest - name: create-vm-from-manifest params: - name: manifest value: | apiVersion: kubevirt. io/v1alpha3 kind: VirtualMachine metadata: generateName: windows-installer- annotation: description: Windows VM generated by windows-installer pipeline labels: app: windows-installer spec: runStrategy: RerunOnFailure template: metadata: labels: kubevirt. io/domain: windows-installer spec: domain: cpu: sockets: 2 cores: 1 threads: 1 resources: requests: memory: 2Gi devices: disks: - name: installcdrom cdrom: bus: sata bootOrder: 1 - name: rootdisk bootOrder: 2 disk: bus: virtio - name: virtiocontainerdisk cdrom: bus: sata - name: sysprepconfig cdrom: bus: sata interfaces: - bridge: {} name: default inputs: - type: tablet bus: usb name: tablet networks: - name: default pod: {} volumes: - name: installcdrom - name: rootdisk - name: virtiocontainerdisk containerDisk: image: kubevirt/virtio-container-disk - name: sysprepconfig sysprep: configMap: name: $(params. autounattendConfigMapName) - name: ownDataVolumes value: - installcdrom:$(tasks. create-source-dv. results. name) - name: dataVolumes value: - rootdisk:$(tasks. create-base-dv. results. name) runAfter: - create-source-dv - create-base-dv taskRef: kind: ClusterTask name: create-vm-from-manifest - name: wait-for-vmi-status params: - name: vmiName value: $(tasks. create-vm-from-manifest. results. name) - name: successCondition value: status. phase == Succeeded - name: failureCondition value: status. phase in (Failed, Unknown) runAfter: - create-vm-from-manifest timeout: '2h' taskRef: kind: ClusterTask name: wait-for-vmi-status - name: cleanup-vm params: - name: vmName value: $(tasks. create-vm-from-manifest. results. name) - name: delete value: true runAfter: - wait-for-vmi-status taskRef: kind: ClusterTask name: cleanup-vmRunning the Pipeline: To run the pipeline we need to create the following PipelineRun which references our Pipeline. Before we do that, we should replace DOWNLOAD_URL with the Windows source URL we obtained earlier. The PipelineRun also specifies the serviceAccount names for all the steps/tasks and the timeout for the whole Pipeline. 
The timeout should be changed appropriately; for example if you have a slow download connection. You can also set a timeout for each task in the Pipeline definition. apiVersion: tekton. dev/v1beta1kind: PipelineRunmetadata: generateName: windows-installer-run-spec: params: - name: winImageDownloadURL value: DOWNLOAD_URL pipelineRef: name: windows-installer timeout: '5h' serviceAccountNames: - taskName: create-source-dv serviceAccountName: create-datavolume-from-manifest-task - taskName: create-base-dv serviceAccountName: create-datavolume-from-manifest-task - taskName: create-vm-from-manifest serviceAccountName: create-vm-from-manifest-task - taskName: wait-for-vmi-status serviceAccountName: wait-for-vmi-status-task - taskName: cleanup-vm serviceAccountName: cleanup-vm-taskInspecting the output: Firstly, you can inspect the progress of the windows-10-source and windows-10-base import: kubectl get dvs | grep windows-10-> windows-10-base-8zxwr Succeeded 100. 0% 21s> windows-10-source-jdv64 ImportInProgress 1. 01% 20sTo inspect the status of the pipeline run: kubectl get pipelinerun -l tekton. dev/pipeline=windows-installer > NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME> windows-installer-run-n2mjf Unknown Running 118sTo check the status of each task and its pods: kubectl get pipelinerun -o yaml -l tekton. dev/pipeline=windows-installer kubectl get pods -l tekton. dev/pipeline=windows-installer Once the pipeline run completes, you should be left with a windows-10-base-xxxxx PVC (backed by a DataVolume). You can then create a new VM with a copy of this PVC to test it. You need to replace PVC_NAME with windows-10-base-xxxxx (you can use kubectl get dvs -o name | grep -o windows-10-base-. * ) and PVC_NAMESPACE with the correct namespace in the following YAML. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: windows-10-vmspec: dataVolumeTemplates: - apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: name: windows-10-vm-root spec: pvc: accessModes: - ReadWriteMany resources: requests: storage: 20Gi source: pvc: name: PVC_NAME namespace: PVC_NAMESPACE running: false template: metadata: labels: kubevirt. io/domain: windows-10-vm spec: domain: cpu: sockets: 2 cores: 1 threads: 1 resources: requests: memory: 2Gi devices: disks: - name: rootdisk bootOrder: 1 disk: bus: virtio - name: virtiocontainerdisk cdrom: bus: sata interfaces: - bridge: {} name: default inputs: - type: tablet bus: usb name: tablet networks: - name: default pod: {} volumes: - name: rootdisk dataVolume: name: windows-10-vm-root - name: virtiocontainerdisk containerDisk: image: kubevirt/virtio-container-diskYou can start the VM and login with Administrator : changepassword credentials. Then you should be welcomed by your fresh VM. Resources: YAML files used in this example KubeVirt Tekton Tasks Tekton Pipelines" + "body": "Introduction: This blog shows how we can easily automate a process of installing Windows VMs on KubeVirt with Tekton Pipelines. Tekton Pipelines can be used to create a single Pipeline that encapsulates the installation process which can be run and replicated with PipelineRuns. The pipeline will be built with KubeVirt Tekton Tasks, which includes all the necessary tasks for this example. Pipeline Description: The pipeline will prepare an empty Persistent Volume Claim (PVC) and download a Windows source ISO into another PVC. Both of them will be initialized with Containerized Data Importer (CDI). 
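For reference, the empty installation PVC is just a blank DataVolume handled by CDI; the sketch below mirrors the manifest the create-base-dv task passes to the pipeline later in this post (the size and generateName are the ones used there, so adjust them for your environment):

  apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
  metadata:
    generateName: windows-10-base-
  spec:
    pvc:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      volumeMode: Filesystem
    source:
      blank: {}   # CDI provisions an empty PVC that will receive the Windows installation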
It will then spin up an installation VM and use Windows Answer Files to automatically install the VM. Then the pipeline will wait for the installation to complete and will delete the installation VM while keeping the artifact PVC with the installed operating system. You can later use the artifact PVC as a base image and copy it for new VMs. Prerequisites: KubeVirt v0. 39. 0 Tekton Pipelines v0. 19. 0 KubeVirt Tekton Tasks v0. 3. 0Running Windows Installer Pipeline: Obtaining a URL of Windows Source ISO: First we have to obtain a Download URL of Windows Source ISO. Go to https://www. microsoft. com/en-us/software-download/windows10ISO. You can also obtain a server edition for evaluation at https://www. microsoft. com/en-us/evalcenter/evaluate-windows-server-2019. Fill in the edition and English language (other languages need to be updated in windows-10-autounattend ConfigMap below) and go to the download page. Right-click on the 64-bit download button and copy the download link. The link should be valid for 24 hours. We will need this URL a bit later when running the pipeline. Preparing autounattend. xml ConfigMap: Now we have to prepare our autounattend. xml Answer File with the installation instructions. We will store it in a ConfigMap, but optionally it can be stored in a Secret as well. The configuration file can be generated with Windows SIMor it can be specified manually according to Answer File Referenceand Answer File Components Reference. The following config map includes the required drivers and guest disk configuration. It also specifies how the installation should proceed and what users should be created. In our case it is an Administrator user with changepassword password. You can also change the Answer File according to your needs by consulting the already mentioned documentation. apiVersion: v1kind: ConfigMapmetadata: name: windows-10-autounattenddata: Autounattend. xml: | <?xml version= 1. 0 encoding= utf-8 ?> <unattend xmlns= urn:schemas-microsoft-com:unattend > <settings pass= windowsPE > <component name= Microsoft-Windows-PnpCustomizationsWinPE publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS processorArchitecture= amd64 xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State > <DriverPaths> <PathAndCredentials wcm:action= add wcm:keyValue= 1 > <Path>E:\viostor\w10\amd64</Path> </PathAndCredentials> <PathAndCredentials wcm:action= add wcm:keyValue= 2 > <Path>E:\NetKVM\w10\amd64</Path> </PathAndCredentials> <PathAndCredentials wcm:action= add wcm:keyValue= 3 > <Path>E:\viorng\w10\amd64</Path> </PathAndCredentials> </DriverPaths> </component> <component name= Microsoft-Windows-International-Core-WinPE processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SetupUILanguage> <UILanguage>en-US</UILanguage> </SetupUILanguage> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component name= Microsoft-Windows-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. 
org/2001/XMLSchema-instance > <DiskConfiguration> <Disk wcm:action= add > <CreatePartitions> <CreatePartition wcm:action= add > <Order>1</Order> <Type>Primary</Type> <Size>100</Size> </CreatePartition> <CreatePartition wcm:action= add > <Extend>true</Extend> <Order>2</Order> <Type>Primary</Type> </CreatePartition> </CreatePartitions> <ModifyPartitions> <ModifyPartition wcm:action= add > <Active>true</Active> <Format>NTFS</Format> <Label>System Reserved</Label> <Order>1</Order> <PartitionID>1</PartitionID> <TypeID>0x27</TypeID> </ModifyPartition> <ModifyPartition wcm:action= add > <Active>true</Active> <Format>NTFS</Format> <Label>OS</Label> <Letter>C</Letter> <Order>2</Order> <PartitionID>2</PartitionID> </ModifyPartition> </ModifyPartitions> <DiskID>0</DiskID> <WillWipeDisk>true</WillWipeDisk> </Disk> </DiskConfiguration> <ImageInstall> <OSImage> <InstallTo> <DiskID>0</DiskID> <PartitionID>2</PartitionID> </InstallTo> <InstallToAvailablePartition>false</InstallToAvailablePartition> </OSImage> </ImageInstall> <UserData> <AcceptEula>true</AcceptEula> <FullName>Administrator</FullName> <Organization></Organization> <ProductKey> <Key>W269N-WFGWX-YVC9B-4J6C9-T83GX</Key> </ProductKey> </UserData> </component> </settings> <settings pass= offlineServicing > <component name= Microsoft-Windows-LUA-Settings processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <EnableLUA>false</EnableLUA> </component> </settings> <settings pass= generalize > <component name= Microsoft-Windows-Security-SPP processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SkipRearm>1</SkipRearm> </component> </settings> <settings pass= specialize > <component name= Microsoft-Windows-International-Core processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component name= Microsoft-Windows-Security-SPP-UX processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SkipAutoActivation>true</SkipAutoActivation> </component> <component name= Microsoft-Windows-SQMApi processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <CEIPEnabled>0</CEIPEnabled> </component> <component name= Microsoft-Windows-Shell-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. 
org/2001/XMLSchema-instance > <ComputerName>WindowsVM</ComputerName> <ProductKey>W269N-WFGWX-YVC9B-4J6C9-T83GX</ProductKey> </component> </settings> <settings pass= oobeSystem > <component name= Microsoft-Windows-Shell-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <AutoLogon> <Password> <Value>changepassword</Value> <PlainText>true</PlainText> </Password> <Enabled>true</Enabled> <Username>Administrator</Username> </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Home</NetworkLocation> <SkipUserOOBE>true</SkipUserOOBE> <SkipMachineOOBE>true</SkipMachineOOBE> <ProtectYourPC>3</ProtectYourPC> </OOBE> <UserAccounts> <LocalAccounts> <LocalAccount wcm:action= add > <Password> <Value>changepassword</Value> <PlainText>true</PlainText> </Password> <Description></Description> <DisplayName>Administrator</DisplayName> <Group>Administrators</Group> <Name>Administrator</Name> </LocalAccount> </LocalAccounts> </UserAccounts> <RegisteredOrganization></RegisteredOrganization> <RegisteredOwner>Administrator</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <FirstLogonCommands> <SynchronousCommand wcm:action= add > <Description>Control Panel View</Description> <Order>1</Order> <CommandLine>reg add HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel /v StartupPage /t REG_DWORD /d 1 /f</CommandLine> <RequiresUserInput>true</RequiresUserInput> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>2</Order> <Description>Control Panel Icon Size</Description> <RequiresUserInput>false</RequiresUserInput> <CommandLine>reg add HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel /v AllItemsIconView /t REG_DWORD /d 0 /f</CommandLine> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>3</Order> <RequiresUserInput>false</RequiresUserInput> <CommandLine>cmd /C wmic useraccount where name= Administrator set PasswordExpires=false</CommandLine> <Description>Password Never Expires</Description> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>4</Order> <Description>Remove AutoAdminLogon</Description> <RequiresUserInput>false</RequiresUserInput> <CommandLine>reg add HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon /v AutoAdminLogon /t REG_SZ /d 0 /f</CommandLine> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>5</Order> <RequiresUserInput>false</RequiresUserInput> <CommandLine>cmd /c shutdown /s /f /t 10</CommandLine> <Description>Shuts down the system</Description> </SynchronousCommand> </FirstLogonCommands> <TimeZone>Alaskan Standard Time</TimeZone> </component> </settings> </unattend>---Creating the Pipeline: Let’s create a pipeline which consists of the following tasks. create-source-dv --- create-vm-from-manifest --- wait-for-vmi-status --- cleanup-vm | create-base-dv -- create-source-dv task downloads a Windows source ISO into a PVC called windows-10-source-*. create-base-dv task creates an empty PVC for new windows installation called windows-10-base-*. 
create-vm-from-manifest task creates a VM called windows-installer-*from the empty PVC and with the windows-10-source-* PVC attached as a CD-ROM. wait-for-vmi-status task waits until the VM shuts down. cleanup-vm deletes the installer VM and ISO PVC. The output artifact will be the windows-10-base-* PVC with the Windows installation. apiVersion: tekton. dev/v1beta1kind: Pipelinemetadata: name: windows-installerspec: params: - name: winImageDownloadURL type: string - name: autounattendConfigMapName default: windows-10-autounattend type: string tasks: - name: create-source-dv params: - name: manifest value: | apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: generateName: windows-10-source- spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 7Gi volumeMode: Filesystem source: http: url: $(params. winImageDownloadURL) - name: waitForSuccess value: 'true' timeout: '2h' taskRef: kind: ClusterTask name: create-datavolume-from-manifest - name: create-base-dv params: - name: manifest value: | apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: generateName: windows-10-base- spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi volumeMode: Filesystem source: blank: {} - name: waitForSuccess value: 'true' taskRef: kind: ClusterTask name: create-datavolume-from-manifest - name: create-vm-from-manifest params: - name: manifest value: | apiVersion: kubevirt. io/v1alpha3 kind: VirtualMachine metadata: generateName: windows-installer- annotation: description: Windows VM generated by windows-installer pipeline labels: app: windows-installer spec: runStrategy: RerunOnFailure template: metadata: labels: kubevirt. io/domain: windows-installer spec: domain: cpu: sockets: 2 cores: 1 threads: 1 resources: requests: memory: 2Gi devices: disks: - name: installcdrom cdrom: bus: sata bootOrder: 1 - name: rootdisk bootOrder: 2 disk: bus: virtio - name: virtiocontainerdisk cdrom: bus: sata - name: sysprepconfig cdrom: bus: sata interfaces: - bridge: {} name: default inputs: - type: tablet bus: usb name: tablet networks: - name: default pod: {} volumes: - name: installcdrom - name: rootdisk - name: virtiocontainerdisk containerDisk: image: kubevirt/virtio-container-disk - name: sysprepconfig sysprep: configMap: name: $(params. autounattendConfigMapName) - name: ownDataVolumes value: - installcdrom:$(tasks. create-source-dv. results. name) - name: dataVolumes value: - rootdisk:$(tasks. create-base-dv. results. name) runAfter: - create-source-dv - create-base-dv taskRef: kind: ClusterTask name: create-vm-from-manifest - name: wait-for-vmi-status params: - name: vmiName value: $(tasks. create-vm-from-manifest. results. name) - name: successCondition value: status. phase == Succeeded - name: failureCondition value: status. phase in (Failed, Unknown) runAfter: - create-vm-from-manifest timeout: '2h' taskRef: kind: ClusterTask name: wait-for-vmi-status - name: cleanup-vm params: - name: vmName value: $(tasks. create-vm-from-manifest. results. name) - name: delete value: true runAfter: - wait-for-vmi-status taskRef: kind: ClusterTask name: cleanup-vmRunning the Pipeline: To run the pipeline we need to create the following PipelineRun which references our Pipeline. Before we do that, we should replace DOWNLOAD_URL with the Windows source URL we obtained earlier. The PipelineRun also specifies the serviceAccount names for all the steps/tasks and the timeout for the whole Pipeline. 
The timeout should be changed appropriately; for example if you have a slow download connection. You can also set a timeout for each task in the Pipeline definition. apiVersion: tekton. dev/v1beta1kind: PipelineRunmetadata: generateName: windows-installer-run-spec: params: - name: winImageDownloadURL value: DOWNLOAD_URL pipelineRef: name: windows-installer timeout: '5h' serviceAccountNames: - taskName: create-source-dv serviceAccountName: create-datavolume-from-manifest-task - taskName: create-base-dv serviceAccountName: create-datavolume-from-manifest-task - taskName: create-vm-from-manifest serviceAccountName: create-vm-from-manifest-task - taskName: wait-for-vmi-status serviceAccountName: wait-for-vmi-status-task - taskName: cleanup-vm serviceAccountName: cleanup-vm-taskInspecting the output: Firstly, you can inspect the progress of the windows-10-source and windows-10-base import: kubectl get dvs | grep windows-10-> windows-10-base-8zxwr Succeeded 100. 0% 21s> windows-10-source-jdv64 ImportInProgress 1. 01% 20sTo inspect the status of the pipeline run: kubectl get pipelinerun -l tekton. dev/pipeline=windows-installer > NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME> windows-installer-run-n2mjf Unknown Running 118sTo check the status of each task and its pods: kubectl get pipelinerun -o yaml -l tekton. dev/pipeline=windows-installer kubectl get pods -l tekton. dev/pipeline=windows-installer Once the pipeline run completes, you should be left with a windows-10-base-xxxxx PVC (backed by a DataVolume). You can then create a new VM with a copy of this PVC to test it. You need to replace PVC_NAME with windows-10-base-xxxxx (you can use kubectl get dvs -o name | grep -o windows-10-base-. * ) and PVC_NAMESPACE with the correct namespace in the following YAML. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: windows-10-vmspec: dataVolumeTemplates: - apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: name: windows-10-vm-root spec: pvc: accessModes: - ReadWriteMany resources: requests: storage: 20Gi source: pvc: name: PVC_NAME namespace: PVC_NAMESPACE runStrategy: Halted template: metadata: labels: kubevirt. io/domain: windows-10-vm spec: domain: cpu: sockets: 2 cores: 1 threads: 1 resources: requests: memory: 2Gi devices: disks: - name: rootdisk bootOrder: 1 disk: bus: virtio - name: virtiocontainerdisk cdrom: bus: sata interfaces: - bridge: {} name: default inputs: - type: tablet bus: usb name: tablet networks: - name: default pod: {} volumes: - name: rootdisk dataVolume: name: windows-10-vm-root - name: virtiocontainerdisk containerDisk: image: kubevirt/virtio-container-diskYou can start the VM and login with Administrator : changepassword credentials. Then you should be welcomed by your fresh VM. Resources: YAML files used in this example KubeVirt Tekton Tasks Tekton Pipelines" }, { "id": 43, "url": "/2021/changelog-v0.40.0.html", @@ -664,7 +664,7 @@

    "title": "Monitoring KubeVirt VMs from the inside", "author" : "arthursens", "tags" : "kubevirt, Kubernetes, virtual machine, VM, prometheus, prometheus-operator, node-exporter, monitoring", - "body": "Monitoring KubeVirt VMs from the inside: This blog post will guide you on how to monitor KubeVirt Linux based VirtualMachines with Prometheus node-exporter. Since node_exporter will run inside the VM and expose metrics at an HTTP endpoint, you can use this same guide to expose custom applications that expose metrics in the Prometheus format. Environment: This set of tools will be used on this guide: Helm v3 - To deploy the Prometheus-Operator. minikube - Will provide us a k8s cluster, you are free to choose any other k8s provider though. kubectl - To deploy different k8s resources virtctl - to interact with KubeVirt VirtualMachines, can be downloaded from the KubeVirt repo. Deploy Prometheus Operator: Once you have your k8s cluster, with minikube or any other provider, the first step will be to deploy the Prometheus Operator. The reason is that the KubeVirt CR, when installed on the cluster, will detect if the ServiceMonitor CR already exists. If it does, then it will create ServiceMonitors configured to monitor all the KubeVirt components (virt-controller, virt-api, and virt-handler) out-of-the-box. Although monitoring KubeVirt itself is not covered in this guide, it is a good practice to always deploy the Prometheus Operator before deploying KubeVirt. To deploy the Prometheus Operator, you will need to create its namespace first, e. g. monitoring: kubectl create ns monitoringThen deploy the operator in the new namespace: helm fetch stable/prometheus-operatortar xzf prometheus-operator*. tgzcd prometheus-operator/ && helm install -n monitoring -f values. yaml kubevirt-prometheus stable/prometheus-operatorAfter everything is deployed, you can delete everything that was downloaded by helm: cd . . rm -rf prometheus-operator*One thing to keep in mind is the release name we added here: kubevirt-prometheus. The release name will be used when declaring our ServiceMonitor later on. . Deploy KubeVirt Operators and KubeVirt CustomResources: Alright, the next step will be deploying KubeVirt itself. We will start with its operator. We will fetch the latest version, then use kubectl create to deploy the manifest directly from Github:: export KUBEVIRT_VERSION=$(curl -s https://api. github. com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- - | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator. yamlBefore deploying the KubeVirt CR, make sure that all kubevirt-operator replicas are ready, you can do that with: kubectl rollout status -n kubevirt deployment virt-operatorAfter that, we can deploy KubeVirt and wait for all it’s components to get ready in a similar manner: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr. yamlkubectl rollout status -n kubevirt deployment virt-apikubectl rollout status -n kubevirt deployment virt-controllerkubectl rollout status -n kubevirt daemonset virt-handlerIf we want to monitor VMs that can restart, we want our node-exporter to be persisted and, thus, we need to set up persistent storage for them. CDI will be the component responsible for that, so we will deploy it’s operator and custom resource as well. 
As always, waiting for the right components to get ready before proceeding: export CDI_VERSION=$(curl -s https://github. com/kubevirt/containerized-data-importer/releases/latest | grep -o v[0-9]\. [0-9]*\. [0-9]* )kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$CDI_VERSION/cdi-operator. yamlkubectl rollout status -n cdi deployment cdi-operatorkubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$CDI_VERSION/cdi-cr. yamlkubectl rollout status -n cdi deployment cdi-apiserverkubectl rollout status -n cdi deployment cdi-uploadproxykubectl rollout status -n cdi deployment cdi-deploymentDeploying a VirtualMachine with persistent storage: Alright, cool. We have everything we need now. Let’s setup the VM. We will start with the PersistenVolume’s required by CDI’s DataVolume resources. Since I’m using minikube with no dynamic storage provider, I’ll be creating 2 PVs with a reference to the PVCs that will claim them. Notice claimRef in each of the PVs. apiVersion: v1kind: PersistentVolumemetadata: name: example-volumespec: storageClassName: claimRef: namespace: default name: cirros-dv accessModes: - ReadWriteOnce capacity: storage: 2Gi hostPath: path: /data/example-volume/---apiVersion: v1kind: PersistentVolumemetadata: name: example-volume-scratchspec: storageClassName: claimRef: namespace: default name: cirros-dv-scratch accessModes: - ReadWriteOnce capacity: storage: 2Gi hostPath: path: /data/example-volume-scratch/With the persistent storage in place, we can create our VM with the following manifest: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: monitorable-vmspec: running: true template: metadata: name: monitorable-vm labels: prometheus. kubevirt. io: node-exporter spec: domain: resources: requests: memory: 1024Mi devices: disks: - disk: bus: virtio name: my-data-volume volumes: - dataVolume: name: cirros-dv name: my-data-volume dataVolumeTemplates: - metadata: name: cirros-dv spec: source: http: url: https://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img pvc: storageClassName: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi Notice that KubeVirt’s VirtualMachine resource has a VirtualMachine template and a dataVolumeTemplate. On the VirtualMachine template, it is important noticing that we named our VM monitorable-vm, and we will use this name to connect to its console with virtctl later on. The label we’ve added, prometheus. kubevirt. io: node-exporter , is also important, since we’ll use it when configuring Prometheus to scrape the VM’s node-exporter On dataVolumeTemplate, it is important noticing that we named the PVC cirros-dv and the DataVolume resource will create 2 PVCs with that, cirros-dv and cirros-dv-scratch. Notice that cirros-dv and cirros-dv-scratch are the names referenced on our PersistentVolume manifests. The names must match for this to work. Installing the node-exporter inside the VM: Once the VirtualMachineInstance is running, we can connect to its console using virtctl console monitorable-vm. If user and password are required, provide your credentials accordingly. If you are using the same disk image from this guide, the user and password are cirros and gocubsgo respectively. The following script will install node-exporter and configure the VM to always start the exporter when booting: curl -LO -k https://github. com/prometheus/node_exporter/releases/download/v1. 0. 1/node_exporter-1. 0. 1. linux-amd64. tar. 
gzgunzip -c node_exporter-1. 0. 1. linux-amd64. tar. gz | tar xopf -. /node_exporter-1. 0. 1. linux-amd64/node_exporter &sudo /bin/sh -c 'cat > /etc/rc. local <<EOF#!/bin/shecho Starting up node_exporter at :9100! /home/cirros/node_exporter-1. 0. 1. linux-amd64/node_exporter 2>&1 > /dev/null &EOF'sudo chmod +x /etc/rc. localP. S. : If you are using a different base image, please configure node-exporter to start at boot time accordingly Configuring Prometheus to scrape the VM’s node-exporter: To configure Prometheus to scrape the node-exporter (or other applications) is really simple. All we need is to create a new Service and a ServiceMonitor: apiVersion: v1kind: Servicemetadata: name: monitorable-vm-node-exporter labels: prometheus. kubevirt. io: node-exporter spec: ports: - name: metrics port: 9100 targetPort: 9100 protocol: TCP selector: prometheus. kubevirt. io: node-exporter ---apiVersion: monitoring. coreos. com/v1kind: ServiceMonitormetadata: name: kubevirt-node-exporters-servicemonitor namespace: monitoring labels: prometheus. kubevirt. io: node-exporter release: monitoringspec: namespaceSelector: any: true selector: matchLabels: prometheus. kubevirt. io: node-exporter endpoints: - port: metrics interval: 15sLet’s break this down just to make sure we set up everything right. Starting with the Service: spec: ports: - name: metrics port: 9100 targetPort: 9100 protocol: TCP selector: prometheus. kubevirt. io: node-exporter On the specification, we are creating a new port named metrics that will be redirected to every pod labeled with prometheus. kubevirt. io: node-exporter , at port 9100, which is the default port number for the node-exporter. apiVersion: v1kind: Servicemetadata: name: monitorable-vm-node-exporter labels: prometheus. kubevirt. io: node-exporter We are also labeling the Service itself with prometheus. kubevirt. io: node-exporter , that will be used by the ServiceMonitor object. Now let’s take a look at our ServiceMonitor specification: spec: namespaceSelector: any: true selector: matchLabels: prometheus. kubevirt. io: node-exporter endpoints: - port: metrics interval: 15sSince our ServiceMonitor will be deployed at the monitoring namespace, but our service is at the default namespace, we need namespaceSelector. any=true. We are also telling our ServiceMonitor that Prometheus needs to scrape endpoints from services labeled with prometheus. kubevirt. io: node-exporter and which ports are named metrics. Luckily, that’s exactly what we did with our Service! One last thing to keep an eye on. Prometheus configuration can be set up to watch multiple ServiceMonitors. We can see which ServiceMonitors our Prometheus is watching with the following command: # Look for Service Monitor Selectorkubectl describe -n monitoring prometheuses. monitoring. coreos. com monitoring-prometheus-oper-prometheusMake sure our ServiceMonitor has all labels required by Prometheus’ Service Monitor Selector. One common selector is the release name that we’ve set when deploying our Prometheus with helm! Testing: You can do a quick test by port-forwarding Prometheus web UI and executing some PromQL: kubectl port-forward -n monitoring prometheus-monitoring-prometheus-oper-prometheus-0 9090:9090To make sure everything is working, access localhost:9090/graph and execute the PromQL up{pod=~ virt-launcher. * }. Prometheus should return data that is being collected from monitorable-vm’s node-exporter. You can play around with virtctl, stop and starting the VM to see how the metrics behave. 
You will notice that when stopping the VM with virtctl stop monitorable-vm, the VirtualMachineInstance is killed and, thus, so is it’s pod. This will result with our service not being able to find the pod’s endpoint and then it will be removed from Prometheus’ targets. With this behavior, alerts like the one below won’t work since our target is literally gone, not down. - alert: KubeVirtVMDown expr: up{pod=~ virt-launcher. * } == 0 for: 1m labels: severity: warning annotations: summary: KubeVirt VM {{ $labels. pod }} is down. BUT, if the VM is constantly crashing without being stopped, the pod won’t be killed and the target will still be monitored. Node-exporter will never start or will go down constantly alongside the VM, so an alert like this might work: - alert: KubeVirtVMCrashing expr: up{pod=~ virt-launcher. * } == 0 for: 5m labels: severity: critical annotations: summary: KubeVirt VM {{ $labels. pod }} is constantly crashing before node-exporter starts at boot. Conclusion: In this blog post we used node-exporter to expose metrics out of a KubeVirt VM. We also configured Prometheus Operator to collect these metrics. This illustrates how to bring Kubernetes monitoring best practices with applications running inside KubeVirt VMs. " + "body": "Monitoring KubeVirt VMs from the inside: This blog post will guide you on how to monitor KubeVirt Linux based VirtualMachines with Prometheus node-exporter. Since node_exporter will run inside the VM and expose metrics at an HTTP endpoint, you can use this same guide to expose custom applications that expose metrics in the Prometheus format. Environment: This set of tools will be used on this guide: Helm v3 - To deploy the Prometheus-Operator. minikube - Will provide us a k8s cluster, you are free to choose any other k8s provider though. kubectl - To deploy different k8s resources virtctl - to interact with KubeVirt VirtualMachines, can be downloaded from the KubeVirt repo. Deploy Prometheus Operator: Once you have your k8s cluster, with minikube or any other provider, the first step will be to deploy the Prometheus Operator. The reason is that the KubeVirt CR, when installed on the cluster, will detect if the ServiceMonitor CR already exists. If it does, then it will create ServiceMonitors configured to monitor all the KubeVirt components (virt-controller, virt-api, and virt-handler) out-of-the-box. Although monitoring KubeVirt itself is not covered in this guide, it is a good practice to always deploy the Prometheus Operator before deploying KubeVirt. To deploy the Prometheus Operator, you will need to create its namespace first, e. g. monitoring: kubectl create ns monitoringThen deploy the operator in the new namespace: helm fetch stable/prometheus-operatortar xzf prometheus-operator*. tgzcd prometheus-operator/ && helm install -n monitoring -f values. yaml kubevirt-prometheus stable/prometheus-operatorAfter everything is deployed, you can delete everything that was downloaded by helm: cd . . rm -rf prometheus-operator*One thing to keep in mind is the release name we added here: kubevirt-prometheus. The release name will be used when declaring our ServiceMonitor later on. . Deploy KubeVirt Operators and KubeVirt CustomResources: Alright, the next step will be deploying KubeVirt itself. We will start with its operator. We will fetch the latest version, then use kubectl create to deploy the manifest directly from Github:: export KUBEVIRT_VERSION=$(curl -s https://api. github. 
com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- - | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator. yamlBefore deploying the KubeVirt CR, make sure that all kubevirt-operator replicas are ready, you can do that with: kubectl rollout status -n kubevirt deployment virt-operatorAfter that, we can deploy KubeVirt and wait for all it’s components to get ready in a similar manner: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr. yamlkubectl rollout status -n kubevirt deployment virt-apikubectl rollout status -n kubevirt deployment virt-controllerkubectl rollout status -n kubevirt daemonset virt-handlerIf we want to monitor VMs that can restart, we want our node-exporter to be persisted and, thus, we need to set up persistent storage for them. CDI will be the component responsible for that, so we will deploy it’s operator and custom resource as well. As always, waiting for the right components to get ready before proceeding: export CDI_VERSION=$(curl -s https://github. com/kubevirt/containerized-data-importer/releases/latest | grep -o v[0-9]\. [0-9]*\. [0-9]* )kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$CDI_VERSION/cdi-operator. yamlkubectl rollout status -n cdi deployment cdi-operatorkubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$CDI_VERSION/cdi-cr. yamlkubectl rollout status -n cdi deployment cdi-apiserverkubectl rollout status -n cdi deployment cdi-uploadproxykubectl rollout status -n cdi deployment cdi-deploymentDeploying a VirtualMachine with persistent storage: Alright, cool. We have everything we need now. Let’s setup the VM. We will start with the PersistenVolume’s required by CDI’s DataVolume resources. Since I’m using minikube with no dynamic storage provider, I’ll be creating 2 PVs with a reference to the PVCs that will claim them. Notice claimRef in each of the PVs. apiVersion: v1kind: PersistentVolumemetadata: name: example-volumespec: storageClassName: claimRef: namespace: default name: cirros-dv accessModes: - ReadWriteOnce capacity: storage: 2Gi hostPath: path: /data/example-volume/---apiVersion: v1kind: PersistentVolumemetadata: name: example-volume-scratchspec: storageClassName: claimRef: namespace: default name: cirros-dv-scratch accessModes: - ReadWriteOnce capacity: storage: 2Gi hostPath: path: /data/example-volume-scratch/With the persistent storage in place, we can create our VM with the following manifest: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: monitorable-vmspec: runStrategy: Always template: metadata: name: monitorable-vm labels: prometheus. kubevirt. io: node-exporter spec: domain: resources: requests: memory: 1024Mi devices: disks: - disk: bus: virtio name: my-data-volume volumes: - dataVolume: name: cirros-dv name: my-data-volume dataVolumeTemplates: - metadata: name: cirros-dv spec: source: http: url: https://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img pvc: storageClassName: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi Notice that KubeVirt’s VirtualMachine resource has a VirtualMachine template and a dataVolumeTemplate. On the VirtualMachine template, it is important noticing that we named our VM monitorable-vm, and we will use this name to connect to its console with virtctl later on. 
The label we’ve added, prometheus. kubevirt. io: node-exporter , is also important, since we’ll use it when configuring Prometheus to scrape the VM’s node-exporter On dataVolumeTemplate, it is important noticing that we named the PVC cirros-dv and the DataVolume resource will create 2 PVCs with that, cirros-dv and cirros-dv-scratch. Notice that cirros-dv and cirros-dv-scratch are the names referenced on our PersistentVolume manifests. The names must match for this to work. Installing the node-exporter inside the VM: Once the VirtualMachineInstance is running, we can connect to its console using virtctl console monitorable-vm. If user and password are required, provide your credentials accordingly. If you are using the same disk image from this guide, the user and password are cirros and gocubsgo respectively. The following script will install node-exporter and configure the VM to always start the exporter when booting: curl -LO -k https://github. com/prometheus/node_exporter/releases/download/v1. 0. 1/node_exporter-1. 0. 1. linux-amd64. tar. gzgunzip -c node_exporter-1. 0. 1. linux-amd64. tar. gz | tar xopf -. /node_exporter-1. 0. 1. linux-amd64/node_exporter &sudo /bin/sh -c 'cat > /etc/rc. local <<EOF#!/bin/shecho Starting up node_exporter at :9100! /home/cirros/node_exporter-1. 0. 1. linux-amd64/node_exporter 2>&1 > /dev/null &EOF'sudo chmod +x /etc/rc. localP. S. : If you are using a different base image, please configure node-exporter to start at boot time accordingly Configuring Prometheus to scrape the VM’s node-exporter: To configure Prometheus to scrape the node-exporter (or other applications) is really simple. All we need is to create a new Service and a ServiceMonitor: apiVersion: v1kind: Servicemetadata: name: monitorable-vm-node-exporter labels: prometheus. kubevirt. io: node-exporter spec: ports: - name: metrics port: 9100 targetPort: 9100 protocol: TCP selector: prometheus. kubevirt. io: node-exporter ---apiVersion: monitoring. coreos. com/v1kind: ServiceMonitormetadata: name: kubevirt-node-exporters-servicemonitor namespace: monitoring labels: prometheus. kubevirt. io: node-exporter release: monitoringspec: namespaceSelector: any: true selector: matchLabels: prometheus. kubevirt. io: node-exporter endpoints: - port: metrics interval: 15sLet’s break this down just to make sure we set up everything right. Starting with the Service: spec: ports: - name: metrics port: 9100 targetPort: 9100 protocol: TCP selector: prometheus. kubevirt. io: node-exporter On the specification, we are creating a new port named metrics that will be redirected to every pod labeled with prometheus. kubevirt. io: node-exporter , at port 9100, which is the default port number for the node-exporter. apiVersion: v1kind: Servicemetadata: name: monitorable-vm-node-exporter labels: prometheus. kubevirt. io: node-exporter We are also labeling the Service itself with prometheus. kubevirt. io: node-exporter , that will be used by the ServiceMonitor object. Now let’s take a look at our ServiceMonitor specification: spec: namespaceSelector: any: true selector: matchLabels: prometheus. kubevirt. io: node-exporter endpoints: - port: metrics interval: 15sSince our ServiceMonitor will be deployed at the monitoring namespace, but our service is at the default namespace, we need namespaceSelector. any=true. We are also telling our ServiceMonitor that Prometheus needs to scrape endpoints from services labeled with prometheus. kubevirt. io: node-exporter and which ports are named metrics. 
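As a quick sanity check, you can confirm that the Service selector actually resolved the VM's pod (this assumes the default namespace used throughout this post); the Endpoints object should list the virt-launcher pod IP on port 9100:

  # The Endpoints object shares the Service name; 9100 is node-exporter's default port
  kubectl get endpoints monitorable-vm-node-exporter -n default -o wide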
Luckily, that’s exactly what we did with our Service! One last thing to keep an eye on. Prometheus configuration can be set up to watch multiple ServiceMonitors. We can see which ServiceMonitors our Prometheus is watching with the following command: # Look for Service Monitor Selectorkubectl describe -n monitoring prometheuses. monitoring. coreos. com monitoring-prometheus-oper-prometheusMake sure our ServiceMonitor has all labels required by Prometheus’ Service Monitor Selector. One common selector is the release name that we’ve set when deploying our Prometheus with helm! Testing: You can do a quick test by port-forwarding the Prometheus web UI and executing some PromQL: kubectl port-forward -n monitoring prometheus-monitoring-prometheus-oper-prometheus-0 9090:9090To make sure everything is working, access localhost:9090/graph and execute the PromQL up{pod=~ virt-launcher. * }. Prometheus should return data that is being collected from monitorable-vm’s node-exporter. You can play around with virtctl, stopping and starting the VM to see how the metrics behave. You will notice that when stopping the VM with virtctl stop monitorable-vm, the VirtualMachineInstance is killed and, thus, so is its pod. This will result in our service not being able to find the pod’s endpoint and then it will be removed from Prometheus’ targets. With this behavior, alerts like the one below won’t work since our target is literally gone, not down. - alert: KubeVirtVMDown expr: up{pod=~ virt-launcher. * } == 0 for: 1m labels: severity: warning annotations: summary: KubeVirt VM {{ $labels. pod }} is down. BUT, if the VM is constantly crashing without being stopped, the pod won’t be killed and the target will still be monitored. Node-exporter will never start or will go down constantly alongside the VM, so an alert like this might work: - alert: KubeVirtVMCrashing expr: up{pod=~ virt-launcher. * } == 0 for: 5m labels: severity: critical annotations: summary: KubeVirt VM {{ $labels. pod }} is constantly crashing before node-exporter starts at boot. Conclusion: In this blog post we used node-exporter to expose metrics out of a KubeVirt VM. We also configured Prometheus Operator to collect these metrics. This illustrates how to bring Kubernetes monitoring best practices to applications running inside KubeVirt VMs. " }, { "id": 51, "url": "/2020/Customizing-images-for-containerized-vms.html",

    "title": "High Availability -- RunStrategies for Virtual Machines", "author" : "Stu Gott", "tags" : "kubevirt, Kubernetes, virtual machine, VM", - "body": "Why Isn’t My VM Running?: There’s been a longstanding point of confusion in KubeVirt’s API. One that was raised yet again a few times recently. The confusion stems from the “Running” field of the VM spec. Language has meaning. It’s natural to take it at face value that “Running” means “Running”, right? Well, not so fast. Spec vs Status: KubeVirt objects follow Kubernetes convention in that they generally have Spec and Status stanzas. The Spec is user configurable and allows the user to indicate the desired state of the cluster in a declarative manner. Meanwhile status sections are not user configurable and reflect the actual state of things in the cluster. In short, users edit the Spec and controllers edit the Status. So back to the Running field. In this case the Running field is in the VM’s Spec. In other words it’s the user’s intent that the VM is running. It doesn’t reflect the actual running state of the VM. RunStrategy: There’s a flip side to the above, equally as confusing: “Running” isn’t always what the user wants. If a user logs into a VM and shuts it down from inside the guest, KubeVirt will dutifully re-spawn it! There certainly exist high availability use cases where that’s exactly the correct reaction, but in most cases that’s just plain confusing. Shutdown is not restart! We decided to tackle both issues at the same time–by deprecating the “Running” field. As already noted, we could have picked a better name to begin with. By using the name “RunStrategy”, it should hopefully be more clear to the end user that they’re asking for a state, which is of course completely separate from what the system can actually provide. While RunStrategy helps address the nomenclature confusion, it also happens to be an enumerated value. Since Running is a boolean, it can only be true or false. We’re now able to create more meaningful states to accommodate different use cases. Four RunStrategies currently exist: Always: If a VM is stopped for any reason, a new instance will be spawned. RerunOnFailure: If a VM ends execution in an error state, a new instance will be spawned. This addressed the second concern listed above. If a user halts a VM manually a new instance will not be spawned. Manual: This is exactly what it means. KubeVirt will neither attempt to start or stop a VM. In order to change state, the user must invoke start/stop/restart from the API. There exist convenience functions in the virtctl command line client as well. Halted: The VM will be stopped if it’s running, and will remain off. An example using the RerunOnFailure RunStrategy was presented in KubeVirt VM Image Usage Patterns High Availability: No discussion of RunStrategies is complete without mentioning High Availability. After all, the implication behind the RerunOnFailure and Always RunStrategies is that your VM should always be available. For the most part this is completely true, but there’s one important scenario where there’s a gap to be aware of: if a node fails completely, e. g. loss of networking or power. Without some means of automatic detection that the node is no longer active, KubeVirt won’t know that the VM has failed. On OpenShift clusters installed using Installer Provisioned Infrastructure (IPI) with MachineHealthCheck enabled can detect failed nodes and reschedule workloads running there. 
Mode information on IPI and MHC can be found here: Installer Provisioned InfrastructureMachine Health Check " + "body": "Why Isn’t My VM Running?: There’s been a longstanding point of confusion in KubeVirt’s API. One that was raised yet again a few times recently. The confusion stems from the “Running” field of the VM spec. Language has meaning. It’s natural to take it at face value that “Running” means “Running”, right? Well, not so fast. Spec vs Status: KubeVirt objects follow Kubernetes convention in that they generally have Spec and Status stanzas. The Spec is user configurable and allows the user to indicate the desired state of the cluster in a declarative manner. Meanwhile status sections are not user configurable and reflect the actual state of things in the cluster. In short, users edit the Spec and controllers edit the Status. So back to the Running field. In this case the Running field is in the VM’s Spec. In other words it’s the user’s intent that the VM is running. It doesn’t reflect the actual running state of the VM. RunStrategy: There’s a flip side to the above, equally as confusing: “Running” isn’t always what the user wants. If a user logs into a VM and shuts it down from inside the guest, KubeVirt will dutifully re-spawn it! There certainly exist high availability use cases where that’s exactly the correct reaction, but in most cases that’s just plain confusing. Shutdown is not restart! We decided to tackle both issues at the same time–by deprecating the “Running” field. As already noted, we could have picked a better name to begin with. By using the name “RunStrategy”, it should hopefully be more clear to the end user that they’re asking for a state, which is of course completely separate from what the system can actually provide. While RunStrategy helps address the nomenclature confusion, it also happens to be an enumerated value. Since Running is a boolean, it can only be true or false. We’re now able to create more meaningful states to accommodate different use cases. Four RunStrategies currently exist: Always: If a VM is stopped for any reason, a new instance will be spawned. RerunOnFailure: If a VM ends execution in an error state, a new instance will be spawned. This addressed the second concern listed above. If a user halts a VM manually a new instance will not be spawned. Once: The VM will run once and not be restarted upon completion regardless if the completion is of phase Failure or Success. Manual: This is exactly what it means. KubeVirt will neither attempt to start or stop a VM. In order to change state, the user must invoke start/stop/restart from the API. There exist convenience functions in the virtctl command line client as well. Halted: The VM will be stopped if it’s running, and will remain off. An example using the RerunOnFailure RunStrategy was presented in KubeVirt VM Image Usage Patterns High Availability: No discussion of RunStrategies is complete without mentioning High Availability. After all, the implication behind the RerunOnFailure and Always RunStrategies is that your VM should always be available. For the most part this is completely true, but there’s one important scenario where there’s a gap to be aware of: if a node fails completely, e. g. loss of networking or power. Without some means of automatic detection that the node is no longer active, KubeVirt won’t know that the VM has failed. 
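As a concrete illustration of the strategies above, a minimal VirtualMachine sketch using RerunOnFailure in place of the deprecated Running field could look like the following (the VM name and sizing are purely illustrative):

  apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  metadata:
    name: ha-example-vm            # illustrative name
  spec:
    runStrategy: RerunOnFailure    # respawn only if the guest ends in a failed state
    template:
      spec:
        domain:
          devices: {}
          resources:
            requests:
              memory: 1024Mi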
OpenShift clusters installed using Installer Provisioned Infrastructure (IPI) with MachineHealthCheck enabled can detect failed nodes and reschedule workloads running there. More information on IPI and MHC can be found here: Installer Provisioned Infrastructure and Machine Health Check " }, { "id": 53, "url": "/2020/changelog-v0.35.0.html",

    "title": "Multiple Network Attachments with bridge CNI", "author" : "ellorent", "tags" : "kubevirt-hyperconverged, cnao, cluster-network-addons-operator, kubernetes-nmstate, nmstate, bridge, multus, networking, CNI, multiple networks", - "body": "Introduction: Over the last years the KubeVirt project has improved a lot regarding secondary interfaces networking configuration. Now it’s possible to do an end to end configuration from host networking to a VM using just the Kubernetes API withspecial Custom Resource Definitions. Moreover, the deployment of all the projects has been simplified by introducing KubeVirt hyperconverged cluster operator (HCO) and cluster network addons operator (CNAO) to install the networking components. The following is the operator hierarchy list presenting the deployment responsibilities of the HCO and CNAO operators used in this blog post: kubevirt-hyperconverged-cluster-operator (HCO) cluster-network-addons-operator (CNAO) multus bridge-cni kubemacpool kubernetes-nmstate KubeVirt Introducing cluster-network-addons-operator: The cluster network addons operator manages the lifecycle (deploy/update/delete) of different Kubernetes network components needed toconfigure secondary interfaces, manage MAC addresses and defines networking on hosts for pods and VMs. A Good thing about having an operator is that everything is done through the API and you don’t have to go over all nodes to install these components yourself and assures smooth updates. In this blog post we are going to use the following components, explained in a greater detail later on: multus: to start a secondary interface on containers in pods linux bridge CNI: to use bridge CNI and connect the secondary interfaces from pods to a linux bridge at nodes kubemacpool: to manage mac addresses kubernetes-nmstate: to configure the linux bridge on the nodesThe list of components we want CNAO to deploy is specified by the NetworkAddonsConfig Custom Resource (CR) and the progress of the installation appears in the CR status field, split per component. To inspectthis progress we can query the CR status with the following command: kubectl get NetworkAddonsConfig cluster -o yamlTo simplify this blog post we are going to use directly the NetworkAddonsConfig from HCO, which by default installs all the network components, but just to illustrate CNAO configuration, the following is a NetworkAddonsConfig CR instructing to deploy multus, linuxBridge, nmstate and kubemacpool components: apiVersion: networkaddonsoperator. network. kubevirt. io/v1kind: NetworkAddonsConfigmetadata: name: clusterspec: multus: {} linuxBridge: {} nmstate: {} imagePullPolicy: AlwaysConnecting Pods, VMs and Nodes over a single secondary network with bridge CNI: Although Kubernetes provides a default interface that gives connectivity to pods and VMs, it’s not easy to configure which NIC should be used for specific pods or VMs in a multi NIC node cluster. A Typical use case is to split control/traffic planes isolated by different NICs on nodes. With linux bridge CNI + multus it’s possible to create a secondary NIC in pod containers and attach it to a L2 linux bridge on nodes. This will add container’s connectivity to a specific NIC on nodes if that NIC is part of the L2 linux bridge. To ensure the configuration is applied only in pods on nodes that have the bridge, the k8s. v1. cni. cncf. io/resourceName label is added. 
This goes hand in hand with another component, bridge-marker which inspects nodes networking and if a new bridge pops up it will mark the node status with it. This is an example of the results from bridge-marker on nodes where bridge br0 is already configured: ---status: allocatable: bridge. network. kubevirt. io/br0: 1k capacity: bridge. network. kubevirt. io/br0: 1kThis is an example of NetworkAttachmentDefinition to expose the bridge available on the host to users: apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: bridge-network annotations: k8s. v1. cni. cncf. io/resourceName: bridge. network. kubevirt. io/br0spec: config: > { cniVersion : 0. 3. 1 , name : br0-l2 , plugins : [{ type : bridge , bridge : br0 , ipam : {} }] }Then adding the bridge secondary network to a pod is a matter of adding the following annotation toit: annotations: k8s. v1. cni. cncf. io/networks: bridge-networkSetting up node networking with NodeNetworkConfigurationPolicy (aka nncp): Changing Kubernetes cluster node networking can be done manually iterating over all the cluster nodes and making changes or using different automatization tools like ansible. However, using just another Kubernetes resource is more convenient. For this purpose the kubernetes-nmstate project was born as a cluster wide node network administrator based on Kubernetes CRs on top of nmstate. It works as a Kubernetes DaemonSet running pods on all the cluster nodes and reconciling three different CRs: NodeNetworkConfigurationPolicy to specify cluster node network desired configuration NodeNetworkConfigurationEnactment (nnce) to troubleshoot issues with nncp NodeNetworkState (nns) to view the node’s networking configurationNote Project kubernetes-nmstate has a distributed architecture to reduce kube-apiserver connectivity dependency, this means that every pod will configure the networking on the node that it’s running without much interaction with kube-apiserver. In case something goes wrong and the pod changing the node network cannot ping the default gateway, resolve DNS root servers or has lost the kube-apiserver connectivity it will rollback to the previous configuration to go back to a working state. Those errors can be checked by running kubectl get nnce. The command displays potential issues per node and nncp. The desired state fields follow the nmstate API described at their awesome doc Also for more details on kubernetes-nmstate there are guides covering reporting, configuration and troubleshooting. There are also nncp examples. Demo: mixing it all together, VM to VM communication between nodes: With the following recipe we will end up with a pair of virtual machines pair on two different nodes with one secondary NICs, eth1 at vlan 100. They will be connected to each other usingthe same bridge on nodes that also have the external secondary NIC eth1 connected. Demo environment setup: We are going to use a kubevirtci as Kubernetes ephemeral cluster provider. To start it up with two nodes and one secondary NIC and install NetworkManager >= 1. 22 (needed for kubernetes-nmstate) and dnsmasq follow these steps: git clone https://github. com/kubevirt/kubevirtcicd kubevirtci# Pin to version working with blog post steps in case# k8s-1. 19 provider disappear in the futuregit reset d5d8e3e376b4c3b45824fbfe320b4c5175b37171 --hardexport KUBEVIRT_PROVIDER=k8s-1. 19export KUBEVIRT_NUM_NODES=2export KUBEVIRT_NUM_SECONDARY_NICS=1make cluster-upexport KUBECONFIG=$(. /cluster-up/kubeconfig. 
sh)Installing components: To install KubeVirt we are going to use the operator kubevirt-hyper-converged-operator, this will install all the componentsneeded to have a functional KubeVirt with all the features including the ones we are going to use: multus, linux-bridge, kubemacpool and kubernetes-nmstate. curl https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/master/deploy/deploy. sh | bashkubectl wait hco -n kubevirt-hyperconverged kubevirt-hyperconverged --for condition=Available --timeout=500sNow we have a Kubernetes cluster with all the pieces to startup a VM with bridge attached to a secondary NIC. Creating the br0 on nodes with a port attached to secondary NIC eth1: First step is to create a L2 linux-bridge at nodes with one port on the secondary NIC eth1, this will beused later on by the bridge CNI. cat <<EOF | kubectl apply -f -apiVersion: nmstate. io/v1alpha1kind: NodeNetworkConfigurationPolicymetadata: name: br0-eth1spec: desiredState: interfaces: - name: br0 description: Linux bridge with eth1 as a port type: linux-bridge state: up bridge: options: stp: enabled: false port: - name: eth1EOFNow we wait for the bridge to be created checking nncp conditions: kubectl wait nncp br0-eth1 --for condition=Available --timeout 2mAfter the nncp becomes available, we can query the nncp resources in the clusterand see it listed with successful status. kubectl get nncpNAME STATUSbr0-eth1 SuccessfullyConfiguredWe can inspect the status of applying the policy to each node. For that there is the NodeNetworkConfigurationEnactment CR (nnce): kubectl get nnceNAME STATUSnode01. br0-eth1 SuccessfullyConfigurednode02. br0-eth1 SuccessfullyConfiguredNote In case of errors it is possible to retrieve the error dumped by nmstate runningkubectl get nnce -o yaml the status will contain the error. We can also inspect the network state on the nodes by retrieving the NodeNetworkState andchecking if the bridge br0 is up using jsonpath kubectl get nns node01 -o=jsonpath='{. status. currentState. interfaces[?(@. name== br0 )]. state}'kubectl get nns node02 -o=jsonpath='{. status. currentState. interfaces[?(@. name== br0 )]. state}'When inspecting the full currentState yaml we get the followinginterface configuration: kubectl get nns node01 -o yamlstatus: currentState: interfaces: - bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: eth1 stp-hairpin-mode: false stp-path-cost: 100 stp-priority: 32 description: Linux bridge with eth1 as a port ipv4: dhcp: false enabled: false ipv6: autoconf: false dhcp: false enabled: false mac-address: 52:55:00:D1:56:00 mtu: 1500 name: br0 state: up type: linux-bridgeWe can also check that the bridge-marker is working and check verify on nodes: kubectl get node node01 -o yamlThe following should appear stating that br0can be consumed on the node: status: allocatable: bridge. network. kubevirt. io/br0: 1k capacity: bridge. network. kubevirt. io/br0: 1kAt this point we have an L2 linux bridge ready and connected to NIC eth1. Configure network attachment with a L2 bridge and a vlan: In order to make the bridge a L2 bridge, we specify no IPAM (IP Address Management) since we arenot going to configure any ip address for the bridge. To configurebridge vlan-filtering we add the vlan we want to use to isolate our VMs: cat <<EOF | kubectl apply -f -apiVersion: k8s. cni. cncf. 
io/v1kind: NetworkAttachmentDefinitionmetadata: name: br0-100-l2 annotations: k8s. v1. cni. cncf. io/resourceName: bridge. network. kubevirt. io/br0spec: config: > { cniVersion : 0. 3. 1 , name : br0-100-l2-config , plugins : [ { type : bridge , bridge : br0 , vlan : 100, ipam : {} }, { type : tuning } ] }EOFStart a pair of VMs on different nodes using the multus configuration to connect a secondary interfaces to br0: Now it’s time to startup the VMs running on different nodes so we can check external connectivity ofbr0. They will also have a secondary NIC eth1 to connect to the other VM running at different node, so they goover the br0 at nodes. The following picture illustrates the cluster: bridgecluster_kubevirtcikubevirtci clustercluster_node01node01cluster_vmavmacluster_node02node02cluster_vmbvmbnd_br1_kubevirtcibr1nd_br0_node01br0nd_eth1_node01eth1nd_br0_node01--nd_eth1_node01nd_eth1_vmaeth1nd_br0_node01--nd_eth1_vmand_eth1_node01--nd_br1_kubevirtcind_br0_node02br0nd_eth1_node02eth1nd_br0_node02--nd_eth1_node02nd_eth1_vmbeth1nd_br0_node02--nd_eth1_vmbnd_eth1_node02--nd_br1_kubevirtciFirst step is to install the virtctl command line tool to play with virtual machines: curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/v0. 33. 0/virtctl-v0. 33. 0-linux-amd64chmod +x virtctlsudo install virtctl /usr/local/binNow let’s create two VirtualMachines on each node. They will have one secondary NIC connected to br0 using the multus configuration for vlan 100. We will also activate kubemacpool to be sure that mac addresses are unique in the cluster and install the qemu-guest-agent so IP addresses from secondary NICs are reported to VM and we can inspect them later on. cat <<EOF | kubectl apply -f -apiVersion: v1kind: Namespacemetadata: name: default labels: mutatevirtualmachines. kubemacpool. io: allocate---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vmaspec: running: true template: spec: nodeSelector: kubernetes. io/hostname: node01 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: br0-100 bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: br0-100 multus: networkName: br0-100-l2 terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 10. 200. 0. 1/24 ] userData: |- #!/bin/bash echo fedora |passwd fedora --stdin dnf -y install qemu-guest-agent sudo systemctl enable qemu-guest-agent sudo systemctl start qemu-guest-agent---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vmbspec: running: true template: spec: nodeSelector: kubernetes. io/hostname: node02 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: br0-100 bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: br0-100 multus: networkName: br0-100-l2 terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 10. 200. 0. 
2/24 ] userData: |- #!/bin/bash echo fedora |passwd fedora --stdin dnf -y install qemu-guest-agent sudo systemctl enable qemu-guest-agent sudo systemctl start qemu-guest-agentEOFWait for the two VMs to be ready. Eventually you will see something like this: kubectl get vmiNAME AGE PHASE IP NODENAMEvma 2m4s Running 10. 244. 196. 142 node01vmb 2m4s Running 10. 244. 140. 86 node02We can check that they have one secondary NIC withoutaddress assigned: kubectl get vmi -o yaml## vma interfaces: - interfaceName: eth0 ipAddress: 10. 244. 196. 144 ipAddresses: - 10. 244. 196. 144 - fd10:244::c48f mac: 02:4a:be:00:00:0a name: default - interfaceName: eth1 ipAddress: 10. 200. 0. 1/24 ipAddresses: - 10. 200. 0. 1/24 - fe80::4a:beff:fe00:b/64 mac: 02:4a:be:00:00:0b name: br0-100## vmb interfaces: - interfaceName: eth0 ipAddress: 10. 244. 140. 84 ipAddresses: - 10. 244. 140. 84 - fd10:244::8c53 mac: 02:4a:be:00:00:0e name: default - interfaceName: eth1 ipAddress: 10. 200. 0. 2/24 ipAddresses: - 10. 200. 0. 2/24 - fe80::4a:beff:fe00:f/64 mac: 02:4a:be:00:00:0f name: br0-100Let’s finish this section by verifying connectivity between vma and vmb using ping. Open the console of vma virtual machine and use ping command with destination IP address 10. 200. 0. 2, which is the address assigned to the secondary interface of vmb: Note The user and password for this VMs is fedora, it was configured at cloudinit userData virtctl console vmaping 10. 200. 0. 2 -c 3PING 10. 200. 0. 2 (10. 200. 0. 2): 56 data bytes64 bytes from 10. 200. 0. 2: seq=0 ttl=50 time=357. 040 ms64 bytes from 10. 200. 0. 2: seq=1 ttl=50 time=379. 742 ms64 bytes from 10. 200. 0. 2: seq=2 ttl=50 time=404. 066 ms--- 10. 200. 0. 2 ping statistics ---3 packets transmitted, 3 packets received, 0% packet lossround-trip min/avg/max = 357. 040/380. 282/404. 066 msConclusion: In this blog post we used network components from KubeVirt project to connect two VMs on different nodesthrough a linux bridge connected to a secondary NIC. This illustrates how VM traffic can be directed to a specific NICon a node using a secondary NIC on a VM. " + "body": "Introduction: Over the last years the KubeVirt project has improved a lot regarding secondary interfaces networking configuration. Now it’s possible to do an end to end configuration from host networking to a VM using just the Kubernetes API withspecial Custom Resource Definitions. Moreover, the deployment of all the projects has been simplified by introducing KubeVirt hyperconverged cluster operator (HCO) and cluster network addons operator (CNAO) to install the networking components. The following is the operator hierarchy list presenting the deployment responsibilities of the HCO and CNAO operators used in this blog post: kubevirt-hyperconverged-cluster-operator (HCO) cluster-network-addons-operator (CNAO) multus bridge-cni kubemacpool kubernetes-nmstate KubeVirt Introducing cluster-network-addons-operator: The cluster network addons operator manages the lifecycle (deploy/update/delete) of different Kubernetes network components needed toconfigure secondary interfaces, manage MAC addresses and defines networking on hosts for pods and VMs. A Good thing about having an operator is that everything is done through the API and you don’t have to go over all nodes to install these components yourself and assures smooth updates. 
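If you would rather block until CNAO has finished rolling out the requested components than poll the status field by hand, something along these lines should work; it assumes the NetworkAddonsConfig CR reports an Available condition once every component listed in its status has been deployed:

kubectl wait networkaddonsconfig cluster --for condition=Available --timeout=300s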
In this blog post we are going to use the following components, explained in a greater detail later on: multus: to start a secondary interface on containers in pods linux bridge CNI: to use bridge CNI and connect the secondary interfaces from pods to a linux bridge at nodes kubemacpool: to manage mac addresses kubernetes-nmstate: to configure the linux bridge on the nodesThe list of components we want CNAO to deploy is specified by the NetworkAddonsConfig Custom Resource (CR) and the progress of the installation appears in the CR status field, split per component. To inspectthis progress we can query the CR status with the following command: kubectl get NetworkAddonsConfig cluster -o yamlTo simplify this blog post we are going to use directly the NetworkAddonsConfig from HCO, which by default installs all the network components, but just to illustrate CNAO configuration, the following is a NetworkAddonsConfig CR instructing to deploy multus, linuxBridge, nmstate and kubemacpool components: apiVersion: networkaddonsoperator. network. kubevirt. io/v1kind: NetworkAddonsConfigmetadata: name: clusterspec: multus: {} linuxBridge: {} nmstate: {} imagePullPolicy: AlwaysConnecting Pods, VMs and Nodes over a single secondary network with bridge CNI: Although Kubernetes provides a default interface that gives connectivity to pods and VMs, it’s not easy to configure which NIC should be used for specific pods or VMs in a multi NIC node cluster. A Typical use case is to split control/traffic planes isolated by different NICs on nodes. With linux bridge CNI + multus it’s possible to create a secondary NIC in pod containers and attach it to a L2 linux bridge on nodes. This will add container’s connectivity to a specific NIC on nodes if that NIC is part of the L2 linux bridge. To ensure the configuration is applied only in pods on nodes that have the bridge, the k8s. v1. cni. cncf. io/resourceName label is added. This goes hand in hand with another component, bridge-marker which inspects nodes networking and if a new bridge pops up it will mark the node status with it. This is an example of the results from bridge-marker on nodes where bridge br0 is already configured: ---status: allocatable: bridge. network. kubevirt. io/br0: 1k capacity: bridge. network. kubevirt. io/br0: 1kThis is an example of NetworkAttachmentDefinition to expose the bridge available on the host to users: apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: bridge-network annotations: k8s. v1. cni. cncf. io/resourceName: bridge. network. kubevirt. io/br0spec: config: > { cniVersion : 0. 3. 1 , name : br0-l2 , plugins : [{ type : bridge , bridge : br0 , ipam : {} }] }Then adding the bridge secondary network to a pod is a matter of adding the following annotation toit: annotations: k8s. v1. cni. cncf. io/networks: bridge-networkSetting up node networking with NodeNetworkConfigurationPolicy (aka nncp): Changing Kubernetes cluster node networking can be done manually iterating over all the cluster nodes and making changes or using different automatization tools like ansible. However, using just another Kubernetes resource is more convenient. For this purpose the kubernetes-nmstate project was born as a cluster wide node network administrator based on Kubernetes CRs on top of nmstate. 
It works as a Kubernetes DaemonSet running pods on all the cluster nodes and reconciling three different CRs: NodeNetworkConfigurationPolicy to specify cluster node network desired configuration NodeNetworkConfigurationEnactment (nnce) to troubleshoot issues with nncp NodeNetworkState (nns) to view the node’s networking configurationNote Project kubernetes-nmstate has a distributed architecture to reduce kube-apiserver connectivity dependency, this means that every pod will configure the networking on the node that it’s running without much interaction with kube-apiserver. In case something goes wrong and the pod changing the node network cannot ping the default gateway, resolve DNS root servers or has lost the kube-apiserver connectivity it will rollback to the previous configuration to go back to a working state. Those errors can be checked by running kubectl get nnce. The command displays potential issues per node and nncp. The desired state fields follow the nmstate API described at their awesome doc Also for more details on kubernetes-nmstate there are guides covering reporting, configuration and troubleshooting. There are also nncp examples. Demo: mixing it all together, VM to VM communication between nodes: With the following recipe we will end up with a pair of virtual machines pair on two different nodes with one secondary NICs, eth1 at vlan 100. They will be connected to each other usingthe same bridge on nodes that also have the external secondary NIC eth1 connected. Demo environment setup: We are going to use a kubevirtci as Kubernetes ephemeral cluster provider. To start it up with two nodes and one secondary NIC and install NetworkManager >= 1. 22 (needed for kubernetes-nmstate) and dnsmasq follow these steps: git clone https://github. com/kubevirt/kubevirtcicd kubevirtci# Pin to version working with blog post steps in case# k8s-1. 19 provider disappear in the futuregit reset d5d8e3e376b4c3b45824fbfe320b4c5175b37171 --hardexport KUBEVIRT_PROVIDER=k8s-1. 19export KUBEVIRT_NUM_NODES=2export KUBEVIRT_NUM_SECONDARY_NICS=1make cluster-upexport KUBECONFIG=$(. /cluster-up/kubeconfig. sh)Installing components: To install KubeVirt we are going to use the operator kubevirt-hyper-converged-operator, this will install all the componentsneeded to have a functional KubeVirt with all the features including the ones we are going to use: multus, linux-bridge, kubemacpool and kubernetes-nmstate. curl https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/master/deploy/deploy. sh | bashkubectl wait hco -n kubevirt-hyperconverged kubevirt-hyperconverged --for condition=Available --timeout=500sNow we have a Kubernetes cluster with all the pieces to startup a VM with bridge attached to a secondary NIC. Creating the br0 on nodes with a port attached to secondary NIC eth1: First step is to create a L2 linux-bridge at nodes with one port on the secondary NIC eth1, this will beused later on by the bridge CNI. cat <<EOF | kubectl apply -f -apiVersion: nmstate. io/v1alpha1kind: NodeNetworkConfigurationPolicymetadata: name: br0-eth1spec: desiredState: interfaces: - name: br0 description: Linux bridge with eth1 as a port type: linux-bridge state: up bridge: options: stp: enabled: false port: - name: eth1EOFNow we wait for the bridge to be created checking nncp conditions: kubectl wait nncp br0-eth1 --for condition=Available --timeout 2mAfter the nncp becomes available, we can query the nncp resources in the clusterand see it listed with successful status. 
kubectl get nncpNAME STATUSbr0-eth1 SuccessfullyConfiguredWe can inspect the status of applying the policy to each node. For that there is the NodeNetworkConfigurationEnactment CR (nnce): kubectl get nnceNAME STATUSnode01. br0-eth1 SuccessfullyConfigurednode02. br0-eth1 SuccessfullyConfiguredNote In case of errors it is possible to retrieve the error dumped by nmstate runningkubectl get nnce -o yaml the status will contain the error. We can also inspect the network state on the nodes by retrieving the NodeNetworkState andchecking if the bridge br0 is up using jsonpath kubectl get nns node01 -o=jsonpath='{. status. currentState. interfaces[?(@. name== br0 )]. state}'kubectl get nns node02 -o=jsonpath='{. status. currentState. interfaces[?(@. name== br0 )]. state}'When inspecting the full currentState yaml we get the followinginterface configuration: kubectl get nns node01 -o yamlstatus: currentState: interfaces: - bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: eth1 stp-hairpin-mode: false stp-path-cost: 100 stp-priority: 32 description: Linux bridge with eth1 as a port ipv4: dhcp: false enabled: false ipv6: autoconf: false dhcp: false enabled: false mac-address: 52:55:00:D1:56:00 mtu: 1500 name: br0 state: up type: linux-bridgeWe can also check that the bridge-marker is working and check verify on nodes: kubectl get node node01 -o yamlThe following should appear stating that br0can be consumed on the node: status: allocatable: bridge. network. kubevirt. io/br0: 1k capacity: bridge. network. kubevirt. io/br0: 1kAt this point we have an L2 linux bridge ready and connected to NIC eth1. Configure network attachment with a L2 bridge and a vlan: In order to make the bridge a L2 bridge, we specify no IPAM (IP Address Management) since we arenot going to configure any ip address for the bridge. To configurebridge vlan-filtering we add the vlan we want to use to isolate our VMs: cat <<EOF | kubectl apply -f -apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: br0-100-l2 annotations: k8s. v1. cni. cncf. io/resourceName: bridge. network. kubevirt. io/br0spec: config: > { cniVersion : 0. 3. 1 , name : br0-100-l2-config , plugins : [ { type : bridge , bridge : br0 , vlan : 100, ipam : {} }, { type : tuning } ] }EOFStart a pair of VMs on different nodes using the multus configuration to connect a secondary interfaces to br0: Now it’s time to startup the VMs running on different nodes so we can check external connectivity ofbr0. They will also have a secondary NIC eth1 to connect to the other VM running at different node, so they goover the br0 at nodes. The following picture illustrates the cluster: bridgecluster_kubevirtcikubevirtci clustercluster_node01node01cluster_vmavmacluster_node02node02cluster_vmbvmbnd_br1_kubevirtcibr1nd_br0_node01br0nd_eth1_node01eth1nd_br0_node01--nd_eth1_node01nd_eth1_vmaeth1nd_br0_node01--nd_eth1_vmand_eth1_node01--nd_br1_kubevirtcind_br0_node02br0nd_eth1_node02eth1nd_br0_node02--nd_eth1_node02nd_eth1_vmbeth1nd_br0_node02--nd_eth1_vmbnd_eth1_node02--nd_br1_kubevirtciFirst step is to install the virtctl command line tool to play with virtual machines: curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/v0. 33. 0/virtctl-v0. 33. 0-linux-amd64chmod +x virtctlsudo install virtctl /usr/local/binNow let’s create two VirtualMachines on each node. 
They will have one secondary NIC connected to br0 using the multus configuration for vlan 100. We will also activate kubemacpool to be sure that mac addresses are unique in the cluster and install the qemu-guest-agent so IP addresses from secondary NICs are reported to VM and we can inspect them later on. cat <<EOF | kubectl apply -f -apiVersion: v1kind: Namespacemetadata: name: default labels: mutatevirtualmachines. kubemacpool. io: allocate---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vmaspec: runStrategy: Always template: spec: nodeSelector: kubernetes. io/hostname: node01 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: br0-100 bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: br0-100 multus: networkName: br0-100-l2 terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 10. 200. 0. 1/24 ] userData: |- #!/bin/bash echo fedora |passwd fedora --stdin dnf -y install qemu-guest-agent sudo systemctl enable qemu-guest-agent sudo systemctl start qemu-guest-agent---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vmbspec: runStrategy: Always template: spec: nodeSelector: kubernetes. io/hostname: node02 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: br0-100 bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: br0-100 multus: networkName: br0-100-l2 terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 10. 200. 0. 2/24 ] userData: |- #!/bin/bash echo fedora |passwd fedora --stdin dnf -y install qemu-guest-agent sudo systemctl enable qemu-guest-agent sudo systemctl start qemu-guest-agentEOFWait for the two VMs to be ready. Eventually you will see something like this: kubectl get vmiNAME AGE PHASE IP NODENAMEvma 2m4s Running 10. 244. 196. 142 node01vmb 2m4s Running 10. 244. 140. 86 node02We can check that they have one secondary NIC withoutaddress assigned: kubectl get vmi -o yaml## vma interfaces: - interfaceName: eth0 ipAddress: 10. 244. 196. 144 ipAddresses: - 10. 244. 196. 144 - fd10:244::c48f mac: 02:4a:be:00:00:0a name: default - interfaceName: eth1 ipAddress: 10. 200. 0. 1/24 ipAddresses: - 10. 200. 0. 1/24 - fe80::4a:beff:fe00:b/64 mac: 02:4a:be:00:00:0b name: br0-100## vmb interfaces: - interfaceName: eth0 ipAddress: 10. 244. 140. 84 ipAddresses: - 10. 244. 140. 84 - fd10:244::8c53 mac: 02:4a:be:00:00:0e name: default - interfaceName: eth1 ipAddress: 10. 200. 0. 2/24 ipAddresses: - 10. 200. 0. 2/24 - fe80::4a:beff:fe00:f/64 mac: 02:4a:be:00:00:0f name: br0-100Let’s finish this section by verifying connectivity between vma and vmb using ping. Open the console of vma virtual machine and use ping command with destination IP address 10. 200. 0. 2, which is the address assigned to the secondary interface of vmb: Note The user and password for this VMs is fedora, it was configured at cloudinit userData virtctl console vmaping 10. 200. 0. 2 -c 3PING 10. 200. 0. 2 (10. 200. 0. 
2): 56 data bytes64 bytes from 10. 200. 0. 2: seq=0 ttl=50 time=357. 040 ms64 bytes from 10. 200. 0. 2: seq=1 ttl=50 time=379. 742 ms64 bytes from 10. 200. 0. 2: seq=2 ttl=50 time=404. 066 ms--- 10. 200. 0. 2 ping statistics ---3 packets transmitted, 3 packets received, 0% packet lossround-trip min/avg/max = 357. 040/380. 282/404. 066 msConclusion: In this blog post we used network components from KubeVirt project to connect two VMs on different nodesthrough a linux bridge connected to a secondary NIC. This illustrates how VM traffic can be directed to a specific NICon a node using a secondary NIC on a VM. " }, { "id": 55, "url": "/2020/changelog-v0.34.0.html", @@ -741,7 +741,7 @@
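As a quicker alternative to dumping the whole VMI yaml as shown above, the address the guest agent reports for the secondary interface can be extracted with a jsonpath filter; the VMI names and the br0-100 network name are the ones used in the demo manifests:

kubectl get vmi vma -o jsonpath='{.status.interfaces[?(@.name=="br0-100")].ipAddress}{"\n"}'
kubectl get vmi vmb -o jsonpath='{.status.interfaces[?(@.name=="br0-100")].ipAddress}{"\n"}'

The same bridge attachment also works for plain pods, not only VMs. The k8s.v1.cni.cncf.io/networks annotation shown earlier only makes sense on a full object, so here is a minimal pod sketch requesting the bridge-network attachment; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: bridge-test                      # illustrative
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-network
spec:
  containers:
  - name: test
    image: quay.io/fedora/fedora:latest  # any image with a shell works
    command: ["sleep", "infinity"]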

    "title": "Common-templates", "author" : "Karel Simon", "tags" : "kubevirt, Kubernetes, virtual machine, VM, common-templates", - "body": "What is a virtual machine template?: The KubeVirt project provides a set of templates https://github. com/kubevirt/common-template to create VMS to handle common usage scenarios. These templates provide a combination of some key factors that could be further customized and processed to have a Virtual Machine object. With common templates you can easily start in a few minutes many VMS with predefined hardware resources (e. g. number of CPUs, requested memory, etc. ). Beware common templates work only on OpenShift. Kubernetes doesn’t have support for templates. What does a VM template cover?: The key factors which define a template are Guest Operating System (OS) This allows to ensure that the emulated hardware is compatible with the guest OS. Furthermore, it allows to maximize the stability of the VM, and allows performance optimizations. Currently common templates support RHEL 6, 7, 8, Centos 6, 7, 8, Fedora 31 and newer, Windows 10, Windows server 2008, 2012 R2, 2016, 2019. The Ansible playbook generate-templates. yaml describes all combinations of templates that should be generated. Workload type of most virtual machines should be server or desktop to have maximum flexibility; the highperformance workload trades some of this flexibility (ioThreadsPolicy is set to shared) to provide better performances (e. g. IO threads). Size (flavor) Defines the amount of resources (CPU, memory) to allocate to the VM. There are 4 sizes: tiny (1 core, 1 Gi memory), small (1 core, 2 Gi memory), medium (1 core, 4 Gi memory), large (2 cores, 8 Gi memory). If these predefined sizes don’t suit you, you can create a new template based on common templates via UI (choose Workloads in the left panel » press Virtualization » press Virtual Machine Templates » press Create Virtual Machine Template blue button) or CLI (update yaml template and create new template). Accessing the virtual machine templates: If you installed KubeVirt using a supported method, you should find the common templates preinstalled in the cluster. If you want to upgrade the templates, or install them from scratch, you can use one of the supported releasesThere are two ways to install and configure templates: Via CLI: To install the templates: $ export VERSION= v0. 11. 2 $ oc create -f https://github. com/kubevirt/common-templates/releases/download/$VERSION/common-templates-$VERSION. yaml To create VM from template: $ oc process rhel8-server-tiny PVCNAME=mydisk NAME=rheltinyvm | oc apply -f - To start VM from created objectThe created object is now a regular VirtualMachine object and from now it can be controlled by accessing Kubernetes API resources. The preferred way to do this is to use virtctl tool. $ virtctl start rheltinyvm An alternative way to start the VM is with the oc patch command. Example: $ oc patch virtualmachine rheltinyvm --type merge -p '{ spec :{ running :true}}' As soon as VM starts, openshift creates a new type of object - VirtualMachineInstance. It has a similar name to VirtualMachine. Via UI: The Kubevirt project has an official plugin in OpenShift Cluster Console Web UI. This UI supports the creation of VMS using templates and template features - flavors and workload profiles. To install the templates: Install OpenShift virtualization operator from Operators > OperatorHub. The operator-based deployment takes care of installing various components, including the common templates. 
To create VM from template: To create a VM from a template, choose Workloads in the left panel » press Virtualization » press Create Virtual Machine blue button » choose New with Wizard. Next, you have to see Create Virtual Machine window This wizard leads you through the basic setup of vm (like guest operating system, workload, flavor, …). After vm is created you can start requested vm. Note after the generation step (UI and CLI), VM objects and template objects have no relationship with each other besides the vm. kubevirt. io/template: rhel8-server-tiny-v0. 10. 0 label. This means that changes in templates do not automatically affect VMS, or vice versa. " + "body": "What is a virtual machine template?: The KubeVirt project provides a set of templates https://github. com/kubevirt/common-template to create VMS to handle common usage scenarios. These templates provide a combination of some key factors that could be further customized and processed to have a Virtual Machine object. With common templates you can easily start in a few minutes many VMS with predefined hardware resources (e. g. number of CPUs, requested memory, etc. ). Beware common templates work only on OpenShift. Kubernetes doesn’t have support for templates. What does a VM template cover?: The key factors which define a template are Guest Operating System (OS) This allows to ensure that the emulated hardware is compatible with the guest OS. Furthermore, it allows to maximize the stability of the VM, and allows performance optimizations. Currently common templates support RHEL 6, 7, 8, Centos 6, 7, 8, Fedora 31 and newer, Windows 10, Windows server 2008, 2012 R2, 2016, 2019. The Ansible playbook generate-templates. yaml describes all combinations of templates that should be generated. Workload type of most virtual machines should be server or desktop to have maximum flexibility; the highperformance workload trades some of this flexibility (ioThreadsPolicy is set to shared) to provide better performances (e. g. IO threads). Size (flavor) Defines the amount of resources (CPU, memory) to allocate to the VM. There are 4 sizes: tiny (1 core, 1 Gi memory), small (1 core, 2 Gi memory), medium (1 core, 4 Gi memory), large (2 cores, 8 Gi memory). If these predefined sizes don’t suit you, you can create a new template based on common templates via UI (choose Workloads in the left panel » press Virtualization » press Virtual Machine Templates » press Create Virtual Machine Template blue button) or CLI (update yaml template and create new template). Accessing the virtual machine templates: If you installed KubeVirt using a supported method, you should find the common templates preinstalled in the cluster. If you want to upgrade the templates, or install them from scratch, you can use one of the supported releasesThere are two ways to install and configure templates: Via CLI: To install the templates: $ export VERSION= v0. 11. 2 $ oc create -f https://github. com/kubevirt/common-templates/releases/download/$VERSION/common-templates-$VERSION. yaml To create VM from template: $ oc process rhel8-server-tiny PVCNAME=mydisk NAME=rheltinyvm | oc apply -f - To start VM from created objectThe created object is now a regular VirtualMachine object and from now it can be controlled by accessing Kubernetes API resources. The preferred way to do this is to use virtctl tool. $ virtctl start rheltinyvm An alternative way to start the VM is with the oc patch command. 
Example: $ oc patch virtualmachine rheltinyvm --type merge -p '{ spec :{ runStrategy : Always }}' As soon as VM starts, openshift creates a new type of object - VirtualMachineInstance. It has a similar name to VirtualMachine. Via UI: The Kubevirt project has an official plugin in OpenShift Cluster Console Web UI. This UI supports the creation of VMS using templates and template features - flavors and workload profiles. To install the templates: Install OpenShift virtualization operator from Operators > OperatorHub. The operator-based deployment takes care of installing various components, including the common templates. To create VM from template: To create a VM from a template, choose Workloads in the left panel » press Virtualization » press Create Virtual Machine blue button » choose New with Wizard. Next, you have to see Create Virtual Machine window This wizard leads you through the basic setup of vm (like guest operating system, workload, flavor, …). After vm is created you can start requested vm. Note after the generation step (UI and CLI), VM objects and template objects have no relationship with each other besides the vm. kubevirt. io/template: rhel8-server-tiny-v0. 10. 0 label. This means that changes in templates do not automatically affect VMS, or vice versa. " }, { "id": 62, "url": "/2020/win_workload_in_k8s.html", @@ -769,7 +769,7 @@
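Note that the flattened body above strips the quoting from the patch payload; typed at a shell, the start and stop patches for the rheltinyvm example look like this:

oc patch virtualmachine rheltinyvm --type merge -p '{"spec":{"runStrategy":"Always"}}'   # power on
oc patch virtualmachine rheltinyvm --type merge -p '{"spec":{"runStrategy":"Halted"}}'   # power off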

    "title": "KubeVirt VM Image Usage Patterns", "author" : "David Vossel", "tags" : "kubevirt, kubernetes, virtual machine, VM, images, storage", - "body": "Building a VM Image Repository: You know what I hear a lot from new KubeVirt users? “How do I manage VM images with KubeVirt? There’s a million options and I have no idea where to start. ” And I agree. It’s not obvious. There are a million ways to use and manipulate VM images with KubeVirt. That’s by design. KubeVirt is meant to be as flexible as possible, but in the process I think we dropped the ball on creating some well defined workflows people can use as a starting point. So, that’s what I’m going to attempt to do. I’ll show you how to make your images accessible in the cluster. I’ll show you how to make a custom VM image repository for use within the cluster. And I’ll show you how to use this at scale using the same patterns you may have used in AWS or GCP. The pattern we’ll use here is… Import a base VM image into the cluster as an PVC Use KubeVirt to create a new immutable custom image with application assets Scale out as many VMIs as we’d like using the pre-provisioned immutable custom image. Remember, this isn’t “the definitive” way of managing VM images in KubeVirt. This is just an example workflow to help people get started. Importing a Base Image: Let’s start with importing a base image into a PVC. For our purposes in this workflow, the base image is meant to be immutable. No VM will use this image directly, instead VMs spawn with their own unique copy of this base image. Think of this just like you would containers. A container image is immutable, and a running container instance is using a copy of an image instead of the image itself. Step 0. Install KubeVirt with CDI: I’m not covering this. Use our documentation linked to below. Understand that CDI (containerized data importer) is the tool we’ll be using to help populate and manage PVCs. Installing KubeVirtInstalling CDI Step 1. Create a namespace for our immutable VM images: We’ll give users the ability to clone VM images living on PVCs from this namespace to their own namespace, but not directly create VMIs within this namespace. kubectl create namespace vm-imagesStep 2. Import your image to a PVC in the image namespace: Below are a few options for importing. For each example, I’m using the Fedora Cloud x86_64 qcow2 image that can be downloaded here If you try these examples yourself, you’ll need to download the current Fedora-Cloud-Base qcow2 image file in your working directory. Example: Import a local VM from your desktop environment using virtctl If you don’t have ingress setup for the cdi-uploadproxy service endpoint (which you don’t if you’re reading this) we can set up a local port forward using kubectl. That gives a route into the cluster to upload the image. Leave the command below executing to open the port. kubectl port-forward -n cdi service/cdi-uploadproxy 18443:443In a separate terminal upload the image over the port forward connection using the virtctl tool. Note that the size of the PVC must be the size of what the qcow image will expand to when converted to a raw image. In this case I chose 5 gigabytes as the PVC size. virtctl image-upload dv fedora-cloud-base --namespace vm-images --size=5Gi --image-path Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 --uploadproxy-url=https://127. 0. 0. 1:18443 --insecureOnce that completes, you’ll have a PVC in the vm-images namespace that contains the Fedora Cloud image. 
kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sExample: Import using a container registry If the image’s footprint is small like our Fedora Cloud Base qcow image, then it probably makes sense to use a container image registry to import our image from a container image to a PVC. In the example below, I start by building a container image with the Fedora Cloud Base qcow VM image in it, and push that container image to my container registry. cat << END > DockerfileFROM scratchADD Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 /disk/ENDdocker build -t quay. io/dvossel/fedora:cloud-base . docker push quay. io/dvossel/fedora:cloud-baseNext a CDI DataVolume is used to import the VM image into a new PVC from the container image you just uploaded to your container registry. Posting the DataVolume manifest below will result in a new 5 gigabyte PVC being created and the VM image being placed on that PVC in a way KubeVirt can consume it. cat << END > fedora-cloud-base-datavolume. yamlapiVersion: cdi. kubevirt. io/v1alpha1kind: DataVolumemetadata: name: fedora-cloud-base namespace: vm-imagesspec: source: registry: url: docker://quay. io/dvossel/fedora:cloud-base pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5GiENDkubectl create -f fedora-cloud-base-datavolume. yamlYou can observe the CDI complete the import by watching the DataVolume object. kubectl describe datavolume fedora-cloud-base -n vm-images. . . Status: Phase: Succeeded Progress: 100. 0%Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ImportScheduled 2m49s datavolume-controller Import into fedora-cloud-base scheduled Normal ImportInProgress 2m46s datavolume-controller Import into fedora-cloud-base in progress Normal Synced 40s (x11 over 2m51s) datavolume-controller DataVolume synced successfully Normal ImportSucceeded 40s datavolume-controller Successfully imported into PVC fedora-cloud-baseOnce the import is complete, you’ll see the image available as a PVC in your vm-images namespace. The PVC will have the same name given to the DataVolume. kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sExample: Import an image from an http or s3 endpoint While I’m not going to provide a detailed example here, another option for importing VM images into a PVC is to host the image on an http server (or as an s3 object) and then use a DataVolume to import the VM image into the PVC from a URL. Replace the url in this example with one hosting the qcow2 image. More information about this import method can be found here. kind: DataVolumemetadata: name: fedora-cloud-base namespace: vm-imagesspec: source: http: url: http://your-web-server-here/images/Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5GiProvisioning New Custom VM Image: The base image itself isn’t that useful to us. Typically what we really want is an immutable VM image preloaded with all our application related assets. This way when the VM boots up, it already has everything it needs pre-provisioned. The pattern we’ll use here is to provision the VM image once, and then use clones of the pre-provisioned VM image as many times as we’d like. For this example, I want a new immutable VM image preloaded with an nginx webserver. 
We can actually describe this entire process of creating this new VM image using the single VM manifest below. Note that I’m starting the VM inside the vm-images namespace. This is because I want the resulting VM image’s cloned PVC to remain in our vm-images repository namespace. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: nginx-provisioner name: nginx-provisioner namespace: vm-imagesspec: runStrategy: RerunOnFailure template: metadata: labels: kubevirt. io/vm: nginx-provisioner spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - dataVolume: name: fedora-nginx name: datavolumedisk1 - cloudInitNoCloud: userData: | #!/bin/sh yum install -y nginx systemctl enable nginx # removing instances ensures cloud init will execute again after reboot rm -rf /var/lib/cloud/instances shutdown now name: cloudinitdisk dataVolumeTemplates: - metadata: name: fedora-nginx spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi source: pvc: namespace: vm-images name: fedora-cloud-baseThere are a few key takeaways from this manifest worth discussing. Usage of runStrategy: “RerunOnFailure”. This tells KubeVirt to treat the VM’s execution similar to a Kubernetes Job. We want the VM to continue retrying until the VM guest shuts itself down gracefully. Usage of the cloudInitNoCloud volume. This volume allows us to inject a script into the VM’s startup procedure. In our case, we want this script to install nginx, configure nginx to launch on startup, and then immediately shutdown the guest gracefully once that is complete. Usage of the dataVolumeTemplates section. This allows us to define a new PVC which is a clone of our fedora-cloud-base base image. The resulting VM image attached to our VM will be a new image pre-populated with nginx. After posting the VM manifest to the cluster, wait for the corresponding VMI to reach the Succeeded phase. kubectl get vmi -n vm-imagesNAME AGE PHASE IP NODENAMEnginx-provisioner 2m26s Succeeded 10. 244. 0. 22 node01This tells us the VM successfully executed the cloud-init script which installed nginx and shut down the guest gracefully. A VMI that never shuts down or repeatedly fails means something is wrong with the provisioning. All that’s left now is to delete the VM and leave the resulting PVC behind as our immutable artifact. We do this by deleting the VM using the –cascade=false option. This tells Kubernetes to delete the VM, but leave behind anything owned by the VM. In this case we’ll be leaving behind the PVC that has nginx provisioned on it. kubectl delete vm nginx-provisioner -n vm-images --cascade=falseAfter deleting the VM, you can see the nginx provisioned PVC in your vm-images namespace. kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sfedora-nginx Bound local-pv-8dla23ds 5Gi RWO local 60sUnderstanding the VM Image Repository: At this point we have a namespace, vm-images, that contains PVCs with our VM images on them. Those PVCs represent VM images in the same way AWS’s AMIs represent VM images and this vm-images namespace is our VM image repository. Using CDI’s icross namespace cloning feature, VM’s can now be launched across multiple namespaces throughout the entire cluster using the PVCs in this “repository”. 
Note that non-admin users need a special RBAC role to allow for this cross namespace PVC cloning. Any non-admin user who needs the ability to access the vm-images namespace for PVC cloning will need the RBAC permissions outlined here. Below is an example of the RBAC necessary to enable cross namespace cloning from the vm-images namespace to the default namespace using the default service account. apiVersion: rbac. authorization. k8s. io/v1kind: ClusterRolemetadata: name: cdi-clonerrules:- apiGroups: [ cdi. kubevirt. io ] resources: [ datavolumes/source ] verbs: [ create ]---apiVersion: rbac. authorization. k8s. io/v1kind: RoleBindingmetadata: name: default-cdi-cloner namespace: vm-imagessubjects:- kind: ServiceAccount name: default namespace: defaultroleRef: kind: ClusterRole name: cdi-cloner apiGroup: rbac. authorization. k8s. ioHorizontally Scaling VMs Using Custom Image: Now that we have our immutable custom VM image, we can create as many VMs as we want using that custom image. Example: Scale out VMI instances using the custom VM image: Clone the custom VM image from the vm-images namespace into the namespace the VMI instances will be running in as a ReadOnlyMany PVC. This will allow concurrent access to a single PVC. apiVersion: cdi. kubevirt. io/v1alpha1kind: DataVolumemetadata: name: nginx-rom namespace: defaultspec: source: pvc: namespace: vm-images name: fedora-nginx pvc: accessModes: - ReadOnlyMany resources: requests: storage: 5GiNext, create a VirtualMachineInstanceReplicaSet that references the nginx-rom PVC as an ephemeral volume. With an ephemeral volume, KubeVirt will mount the PVC read only, and use a cow (copy on write) ephemeral volume on local storage to back each individual VMI. This ephemeral data’s life cycle is limited to the life cycle of each VMI. Here’s an example manifest of a VirtualMachineInstanceReplicaSet starting 5 instances of our nginx server in separate VMIs. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstanceReplicaSetmetadata: labels: kubevirt. io/vmReplicaSet: nginx name: nginxspec: replicas: 5 template: metadata: labels: kubevirt. io/vmReplicaSet: nginx spec: domain: devices: disks: - disk: bus: virtio name: nginx-image - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - ephemeral: name: nginx-image persistentVolumeClaim: claimName: nginx-rom - cloudInitNoCloud: userData: | # add any custom logic you want to occur on startup here. echo “cloud-init script execution name: cloudinitdiskExample: Launching a Single “Pet” VM from Custom Image: In the manifest below, we’re starting a new VM with a PVC cloned from our pre-provisioned VM image that contains the nginx server. When the VM boots up, a new PVC will be created in the VM’s namespace that is a clone of the PVC referenced in our vm-images namespace. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: nginx name: nginxspec: running: true template: metadata: labels: kubevirt. io/vm: nginx spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - dataVolume: name: nginx name: datavolumedisk1 - cloudInitNoCloud: userData: | # add any custom logic you want to occur on startup here. 
echo “cloud-init script execution name: cloudinitdisk dataVolumeTemplates: - metadata: name: nginx spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi source: pvc: namespace: vm-images name: fedora-nginxOther Custom Creation Image Tools: In my example I imported a VM base image into the cluster and used KubeVirt to provision a custom image with a technique that used cloud-init. This may or may not make sense for your use case. It’s possible you need to pre-provision the VM image before importing into the cluster at all. If that’s the case, I suggest looking into two tools. Packer. io using the qemu builder. This allows you to automate building custom images on your local machine using configuration files that describe all the build steps. I like this tool because it closely matches the Kubernetes “declarative” approach. Virt-customize is a cli tool that allows you to customize local VM images by injecting/modifying files on disk and installing packages. Virt-install is a cli tool that allows you to automate a VM install as if you were installing it from a cdrom. You’ll want to look into using a kickstart file to fully automate the process. The resulting VM image artifact created from any of these tools can then be imported into the cluster in the same way we imported the base image earlier in this document. " + "body": "Building a VM Image Repository: You know what I hear a lot from new KubeVirt users? “How do I manage VM images with KubeVirt? There’s a million options and I have no idea where to start. ” And I agree. It’s not obvious. There are a million ways to use and manipulate VM images with KubeVirt. That’s by design. KubeVirt is meant to be as flexible as possible, but in the process I think we dropped the ball on creating some well defined workflows people can use as a starting point. So, that’s what I’m going to attempt to do. I’ll show you how to make your images accessible in the cluster. I’ll show you how to make a custom VM image repository for use within the cluster. And I’ll show you how to use this at scale using the same patterns you may have used in AWS or GCP. The pattern we’ll use here is… Import a base VM image into the cluster as an PVC Use KubeVirt to create a new immutable custom image with application assets Scale out as many VMIs as we’d like using the pre-provisioned immutable custom image. Remember, this isn’t “the definitive” way of managing VM images in KubeVirt. This is just an example workflow to help people get started. Importing a Base Image: Let’s start with importing a base image into a PVC. For our purposes in this workflow, the base image is meant to be immutable. No VM will use this image directly, instead VMs spawn with their own unique copy of this base image. Think of this just like you would containers. A container image is immutable, and a running container instance is using a copy of an image instead of the image itself. Step 0. Install KubeVirt with CDI: I’m not covering this. Use our documentation linked to below. Understand that CDI (containerized data importer) is the tool we’ll be using to help populate and manage PVCs. Installing KubeVirtInstalling CDI Step 1. Create a namespace for our immutable VM images: We’ll give users the ability to clone VM images living on PVCs from this namespace to their own namespace, but not directly create VMIs within this namespace. kubectl create namespace vm-imagesStep 2. Import your image to a PVC in the image namespace: Below are a few options for importing. 
For each example, I’m using the Fedora Cloud x86_64 qcow2 image that can be downloaded here If you try these examples yourself, you’ll need to download the current Fedora-Cloud-Base qcow2 image file in your working directory. Example: Import a local VM from your desktop environment using virtctl If you don’t have ingress setup for the cdi-uploadproxy service endpoint (which you don’t if you’re reading this) we can set up a local port forward using kubectl. That gives a route into the cluster to upload the image. Leave the command below executing to open the port. kubectl port-forward -n cdi service/cdi-uploadproxy 18443:443In a separate terminal upload the image over the port forward connection using the virtctl tool. Note that the size of the PVC must be the size of what the qcow image will expand to when converted to a raw image. In this case I chose 5 gigabytes as the PVC size. virtctl image-upload dv fedora-cloud-base --namespace vm-images --size=5Gi --image-path Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 --uploadproxy-url=https://127. 0. 0. 1:18443 --insecureOnce that completes, you’ll have a PVC in the vm-images namespace that contains the Fedora Cloud image. kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sExample: Import using a container registry If the image’s footprint is small like our Fedora Cloud Base qcow image, then it probably makes sense to use a container image registry to import our image from a container image to a PVC. In the example below, I start by building a container image with the Fedora Cloud Base qcow VM image in it, and push that container image to my container registry. cat << END > DockerfileFROM scratchADD Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 /disk/ENDdocker build -t quay. io/dvossel/fedora:cloud-base . docker push quay. io/dvossel/fedora:cloud-baseNext a CDI DataVolume is used to import the VM image into a new PVC from the container image you just uploaded to your container registry. Posting the DataVolume manifest below will result in a new 5 gigabyte PVC being created and the VM image being placed on that PVC in a way KubeVirt can consume it. cat << END > fedora-cloud-base-datavolume. yamlapiVersion: cdi. kubevirt. io/v1alpha1kind: DataVolumemetadata: name: fedora-cloud-base namespace: vm-imagesspec: source: registry: url: docker://quay. io/dvossel/fedora:cloud-base pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5GiENDkubectl create -f fedora-cloud-base-datavolume. yamlYou can observe the CDI complete the import by watching the DataVolume object. kubectl describe datavolume fedora-cloud-base -n vm-images. . . Status: Phase: Succeeded Progress: 100. 0%Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ImportScheduled 2m49s datavolume-controller Import into fedora-cloud-base scheduled Normal ImportInProgress 2m46s datavolume-controller Import into fedora-cloud-base in progress Normal Synced 40s (x11 over 2m51s) datavolume-controller DataVolume synced successfully Normal ImportSucceeded 40s datavolume-controller Successfully imported into PVC fedora-cloud-baseOnce the import is complete, you’ll see the image available as a PVC in your vm-images namespace. The PVC will have the same name given to the DataVolume. 
kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sExample: Import an image from an http or s3 endpoint While I’m not going to provide a detailed example here, another option for importing VM images into a PVC is to host the image on an http server (or as an s3 object) and then use a DataVolume to import the VM image into the PVC from a URL. Replace the url in this example with one hosting the qcow2 image. More information about this import method can be found here. kind: DataVolumemetadata: name: fedora-cloud-base namespace: vm-imagesspec: source: http: url: http://your-web-server-here/images/Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5GiProvisioning New Custom VM Image: The base image itself isn’t that useful to us. Typically what we really want is an immutable VM image preloaded with all our application related assets. This way when the VM boots up, it already has everything it needs pre-provisioned. The pattern we’ll use here is to provision the VM image once, and then use clones of the pre-provisioned VM image as many times as we’d like. For this example, I want a new immutable VM image preloaded with an nginx webserver. We can actually describe this entire process of creating this new VM image using the single VM manifest below. Note that I’m starting the VM inside the vm-images namespace. This is because I want the resulting VM image’s cloned PVC to remain in our vm-images repository namespace. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: nginx-provisioner name: nginx-provisioner namespace: vm-imagesspec: runStrategy: RerunOnFailure template: metadata: labels: kubevirt. io/vm: nginx-provisioner spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - dataVolume: name: fedora-nginx name: datavolumedisk1 - cloudInitNoCloud: userData: | #!/bin/sh yum install -y nginx systemctl enable nginx # removing instances ensures cloud init will execute again after reboot rm -rf /var/lib/cloud/instances shutdown now name: cloudinitdisk dataVolumeTemplates: - metadata: name: fedora-nginx spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi source: pvc: namespace: vm-images name: fedora-cloud-baseThere are a few key takeaways from this manifest worth discussing. Usage of runStrategy: “RerunOnFailure”. This tells KubeVirt to treat the VM’s execution similar to a Kubernetes Job. We want the VM to continue retrying until the VM guest shuts itself down gracefully. Usage of the cloudInitNoCloud volume. This volume allows us to inject a script into the VM’s startup procedure. In our case, we want this script to install nginx, configure nginx to launch on startup, and then immediately shutdown the guest gracefully once that is complete. Usage of the dataVolumeTemplates section. This allows us to define a new PVC which is a clone of our fedora-cloud-base base image. The resulting VM image attached to our VM will be a new image pre-populated with nginx. After posting the VM manifest to the cluster, wait for the corresponding VMI to reach the Succeeded phase. kubectl get vmi -n vm-imagesNAME AGE PHASE IP NODENAMEnginx-provisioner 2m26s Succeeded 10. 244. 0. 
22 node01This tells us the VM successfully executed the cloud-init script which installed nginx and shut down the guest gracefully. A VMI that never shuts down or repeatedly fails means something is wrong with the provisioning. All that’s left now is to delete the VM and leave the resulting PVC behind as our immutable artifact. We do this by deleting the VM using the --cascade=false option. This tells Kubernetes to delete the VM, but leave behind anything owned by the VM. In this case we’ll be leaving behind the PVC that has nginx provisioned on it. kubectl delete vm nginx-provisioner -n vm-images --cascade=falseAfter deleting the VM, you can see the nginx provisioned PVC in your vm-images namespace. kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sfedora-nginx Bound local-pv-8dla23ds 5Gi RWO local 60sUnderstanding the VM Image Repository: At this point we have a namespace, vm-images, that contains PVCs with our VM images on them. Those PVCs represent VM images in the same way AWS’s AMIs represent VM images and this vm-images namespace is our VM image repository. Using CDI’s cross namespace cloning feature, VMs can now be launched across multiple namespaces throughout the entire cluster using the PVCs in this “repository”. Note that non-admin users need a special RBAC role to allow for this cross namespace PVC cloning. Any non-admin user who needs the ability to access the vm-images namespace for PVC cloning will need the RBAC permissions outlined here. Below is an example of the RBAC necessary to enable cross namespace cloning from the vm-images namespace to the default namespace using the default service account. apiVersion: rbac. authorization. k8s. io/v1kind: ClusterRolemetadata: name: cdi-clonerrules:- apiGroups: [ cdi. kubevirt. io ] resources: [ datavolumes/source ] verbs: [ create ]---apiVersion: rbac. authorization. k8s. io/v1kind: RoleBindingmetadata: name: default-cdi-cloner namespace: vm-imagessubjects:- kind: ServiceAccount name: default namespace: defaultroleRef: kind: ClusterRole name: cdi-cloner apiGroup: rbac. authorization. k8s. ioHorizontally Scaling VMs Using Custom Image: Now that we have our immutable custom VM image, we can create as many VMs as we want using that custom image. Example: Scale out VMI instances using the custom VM image: Clone the custom VM image from the vm-images namespace into the namespace the VMI instances will be running in as a ReadOnlyMany PVC. This will allow concurrent access to a single PVC. apiVersion: cdi. kubevirt. io/v1alpha1kind: DataVolumemetadata: name: nginx-rom namespace: defaultspec: source: pvc: namespace: vm-images name: fedora-nginx pvc: accessModes: - ReadOnlyMany resources: requests: storage: 5GiNext, create a VirtualMachineInstanceReplicaSet that references the nginx-rom PVC as an ephemeral volume. With an ephemeral volume, KubeVirt will mount the PVC read only, and use a cow (copy on write) ephemeral volume on local storage to back each individual VMI. This ephemeral data’s life cycle is limited to the life cycle of each VMI. Here’s an example manifest of a VirtualMachineInstanceReplicaSet starting 5 instances of our nginx server in separate VMIs. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstanceReplicaSetmetadata: labels: kubevirt. io/vmReplicaSet: nginx name: nginxspec: replicas: 5 template: metadata: labels: kubevirt.
io/vmReplicaSet: nginx spec: domain: devices: disks: - disk: bus: virtio name: nginx-image - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - ephemeral: name: nginx-image persistentVolumeClaim: claimName: nginx-rom - cloudInitNoCloud: userData: | # add any custom logic you want to occur on startup here. echo “cloud-init script execution name: cloudinitdiskExample: Launching a Single “Pet” VM from Custom Image: In the manifest below, we’re starting a new VM with a PVC cloned from our pre-provisioned VM image that contains the nginx server. When the VM boots up, a new PVC will be created in the VM’s namespace that is a clone of the PVC referenced in our vm-images namespace. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: nginx name: nginxspec: runStrategy: Always template: metadata: labels: kubevirt. io/vm: nginx spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - dataVolume: name: nginx name: datavolumedisk1 - cloudInitNoCloud: userData: | # add any custom logic you want to occur on startup here. echo “cloud-init script execution name: cloudinitdisk dataVolumeTemplates: - metadata: name: nginx spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi source: pvc: namespace: vm-images name: fedora-nginxOther Custom Creation Image Tools: In my example I imported a VM base image into the cluster and used KubeVirt to provision a custom image with a technique that used cloud-init. This may or may not make sense for your use case. It’s possible you need to pre-provision the VM image before importing into the cluster at all. If that’s the case, I suggest looking into two tools. Packer. io using the qemu builder. This allows you to automate building custom images on your local machine using configuration files that describe all the build steps. I like this tool because it closely matches the Kubernetes “declarative” approach. Virt-customize is a cli tool that allows you to customize local VM images by injecting/modifying files on disk and installing packages. Virt-install is a cli tool that allows you to automate a VM install as if you were installing it from a cdrom. You’ll want to look into using a kickstart file to fully automate the process. The resulting VM image artifact created from any of these tools can then be imported into the cluster in the same way we imported the base image earlier in this document. " }, { "id": 66, "url": "/2020/changelog-v0.29.0.html", @@ -797,7 +797,7 @@
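To make the virt-customize option above a bit more concrete, a pre-provisioned nginx image could also be built entirely outside the cluster with something along these lines (only a sketch; it assumes the libguestfs tools are installed locally and reuses the Fedora Cloud qcow2 file from earlier):

virt-customize -a Fedora-Cloud-Base-XX-X.X.x86_64.qcow2 \
  --install nginx \
  --run-command 'systemctl enable nginx'

The resulting qcow2 can then be imported into the vm-images namespace with virtctl image-upload or a DataVolume, exactly as the base image was imported earlier.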

    "title": "KubeVirt Architecture Fundamentals", "author" : "David Vossel", "tags" : "kubevirt, kubernetes, virtual machine, VM, design, architecture", - "body": "Placing our Bets: Back in 2017 the KubeVirt architecture team got together and placed their bets on a set of core design principles that became the foundation of what KubeVirt is today. At the time, our decisions broke convention. We chose to take some calculated risks with the understanding that those risks had a real chance of not playing out in our favor. Luckily, time has proven our bets were well placed. Since those early discussions back in 2017, KubeVirt has grown from a theoretical prototype into a project deployed in production environments with a thriving open source community. While KubeVirt has grown in maturity and sophistication throughout the past few years, the initial set of guidelines established in those early discussions still govern the project’s architecture today. Those guidelines can be summarized nearly entirely by the following two key decisions. Virtual machines run in Pods using the existing container runtimes. This decision came at a time when other Kubernetes virtualization efforts were creating their own virtualization specific CRI runtimes. We took a bet on our ability to successfully launch virtual machines using existing and future container runtimes within an unadulterated Pod environment. Virtual machines are managed using a custom “Kubernetes like” declarative API. When this decision was made, imperative APIs were the defacto standard for how other platforms managed virtual machines. However, we knew in order to succeed in our mission to deliver a truly cloud-native API managed using existing Kubernetes tooling (like kubectl), we had to adhere fully to the declarative workflow. We took a bet that the lackluster Kubernetes Third Party Resource support (now known as CRDs) would eventually provide the ability to create custom declarative APIs as first class citizens in the cluster. Let’s dive into these two points a bit and take a look at how these two key decisions permeated throughout our entire design. Virtual Machines as Pods: We often pitch KubeVirt by saying something like “KubeVirt allows you to run virtual machines side by side with your container workloads”. However, the reality is we’re delivering virtual machines as container workloads. So as far as Kubernetes is concerned, there are no virtual machines, just pods and containers. Fundamentally, KubeVirt virtual machines just look like any other containerized application to the rest of the cluster. It’s our KubeVirt API and control plane that make these containerized virtual machines behave like you’d expect from using other virtual machine management platforms. The payoff from running virtual machines within a Kubernetes Pod has been huge for us. There’s an entire ecosystem that continues to grow around how to provide pods with access to networks, storage, host devices, cpu, memory, and more. This means every time a problem or feature is added to pods, it’s yet another tool we can use for virtual machines. Here are a few examples of how pod features meet the needs of virtual machines as well. Storage: Virtual machines need persistent disks. Users should be able to stop a VM, start a VM, and have the data persist. There’s a Kubernetes storage abstraction called a PVC (persistent volume claim) that allows persistent storage to be attached to a pod. 
This means by placing the virtual machine in a pod, we can use the existing PVC mechanisms of delivering persistent storage to deliver our virtual machine disks. Network: Virtual machines need access to cluster networking. Pods are provided network interfaces that tie directly into the pod network via CNI. We can give a virtual machine running in a pod access to the pod network using the default CNI allocated network interfaces already present in the pod’s environment. CPU/Memory: Users need the ability to assign cpu and memory resources to Virtual machines. We can assign cpu and memory to pods using the resource requests/limits on the pod spec. This means through the use of pod resource requests/limits we are able to assign resources directly to virtual machines as well. This list goes on and on. As problems are solved for pods, KubeVirt leverages the solution and translates it to the virtual machine equivalent. The Declarative KubeVirt Virtualization API: While a KubeVirt virtual machine runs within a pod, that doesn’t change the fact that people working with virtual machines have a different set of expectations for how virtual machines should work compared to how pods are managed. Here’s the conflict. Pods are mortal workloads. A pod is declared by posting it’s manifest to the cluster, the pod runs once to completion, and that’s it. It’s done. Virtual machines are immortal workloads. A virtual machine doesn’t just run once to completion. Virtual machines have state. They can be started, stopped, and restarted any number of times. Virtual machines have concepts like live migration as well. Furthermore if the node a virtual machine is running on dies, the expectation is for that exact same virtual machine to resurrect on another node maintaining its state. So, pods run once and virtual machines live forever. How do we reconcile the two? Our solution came from taking a play directly out of the Kubernetes playbook. The Kubernetes core apis have this concept of layering objects on top of one another through the use of workload controllers. For example, the Kubernetes ReplicaSet is a workload controller layered on top of pods. The ReplicaSet controller manages ensuring that there are always ‘x’ number of pod replicas running within the cluster. If a ReplicaSet object declares that 5 pod replicas should be running, but a node dies bringing that total to 4, then the ReplicaSet workload controller manages spinning up a 5th pod in order to meet the declared replica count. The workload controller is always reconciling on the ReplicaSet objects desired state. Using this established Kubernetes pattern of layering objects on top of one another, we came up with our own virtualization specific API and corresponding workload controller called a “VirtualMachine” (big surprise there on the name, right?). Users declare a VirtualMachine object just like they would a pod by posting the VirtualMachine object’s manifest to the cluster. The big difference here that deviates from how pods are managed is that we allow VirtualMachine objects to be declared to exist in different states. For example, you can declare you want to “start” a virtual machine by setting “running: true” on the VirtualMachine object’s spec. Likewise you can declare you want to “stop” a virtual machine by setting “running: false” on the VirtualMachine object’s spec. Behind the scenes, setting the “running” field to true or false results in the workload controller creating or deleting a pod for the virtual machine to live in. 
In the end, we essentially created the concept of an immortal VirtualMachine by laying our own custom API on top of mortal pods. Our API and controller knows how to resurrect a “stopped” VirtualMachine by constructing a pod with all the right network, storage volumes, cpu, and memory attached to in order to accurately bring the VirtualMachine back to life with the exact same state it stopped with. " + "body": "Placing our Bets: Back in 2017 the KubeVirt architecture team got together and placed their bets on a set of core design principles that became the foundation of what KubeVirt is today. At the time, our decisions broke convention. We chose to take some calculated risks with the understanding that those risks had a real chance of not playing out in our favor. Luckily, time has proven our bets were well placed. Since those early discussions back in 2017, KubeVirt has grown from a theoretical prototype into a project deployed in production environments with a thriving open source community. While KubeVirt has grown in maturity and sophistication throughout the past few years, the initial set of guidelines established in those early discussions still govern the project’s architecture today. Those guidelines can be summarized nearly entirely by the following two key decisions. Virtual machines run in Pods using the existing container runtimes. This decision came at a time when other Kubernetes virtualization efforts were creating their own virtualization specific CRI runtimes. We took a bet on our ability to successfully launch virtual machines using existing and future container runtimes within an unadulterated Pod environment. Virtual machines are managed using a custom “Kubernetes like” declarative API. When this decision was made, imperative APIs were the defacto standard for how other platforms managed virtual machines. However, we knew in order to succeed in our mission to deliver a truly cloud-native API managed using existing Kubernetes tooling (like kubectl), we had to adhere fully to the declarative workflow. We took a bet that the lackluster Kubernetes Third Party Resource support (now known as CRDs) would eventually provide the ability to create custom declarative APIs as first class citizens in the cluster. Let’s dive into these two points a bit and take a look at how these two key decisions permeated throughout our entire design. Virtual Machines as Pods: We often pitch KubeVirt by saying something like “KubeVirt allows you to run virtual machines side by side with your container workloads”. However, the reality is we’re delivering virtual machines as container workloads. So as far as Kubernetes is concerned, there are no virtual machines, just pods and containers. Fundamentally, KubeVirt virtual machines just look like any other containerized application to the rest of the cluster. It’s our KubeVirt API and control plane that make these containerized virtual machines behave like you’d expect from using other virtual machine management platforms. The payoff from running virtual machines within a Kubernetes Pod has been huge for us. There’s an entire ecosystem that continues to grow around how to provide pods with access to networks, storage, host devices, cpu, memory, and more. This means every time a problem or feature is added to pods, it’s yet another tool we can use for virtual machines. Here are a few examples of how pod features meet the needs of virtual machines as well. Storage: Virtual machines need persistent disks. 
Users should be able to stop a VM, start a VM, and have the data persist. There’s a Kubernetes storage abstraction called a PVC (persistent volume claim) that allows persistent storage to be attached to a pod. This means by placing the virtual machine in a pod, we can use the existing PVC mechanisms of delivering persistent storage to deliver our virtual machine disks. Network: Virtual machines need access to cluster networking. Pods are provided network interfaces that tie directly into the pod network via CNI. We can give a virtual machine running in a pod access to the pod network using the default CNI allocated network interfaces already present in the pod’s environment. CPU/Memory: Users need the ability to assign cpu and memory resources to Virtual machines. We can assign cpu and memory to pods using the resource requests/limits on the pod spec. This means through the use of pod resource requests/limits we are able to assign resources directly to virtual machines as well. This list goes on and on. As problems are solved for pods, KubeVirt leverages the solution and translates it to the virtual machine equivalent. The Declarative KubeVirt Virtualization API: While a KubeVirt virtual machine runs within a pod, that doesn’t change the fact that people working with virtual machines have a different set of expectations for how virtual machines should work compared to how pods are managed. Here’s the conflict. Pods are mortal workloads. A pod is declared by posting it’s manifest to the cluster, the pod runs once to completion, and that’s it. It’s done. Virtual machines are immortal workloads. A virtual machine doesn’t just run once to completion. Virtual machines have state. They can be started, stopped, and restarted any number of times. Virtual machines have concepts like live migration as well. Furthermore if the node a virtual machine is running on dies, the expectation is for that exact same virtual machine to resurrect on another node maintaining its state. So, pods run once and virtual machines live forever. How do we reconcile the two? Our solution came from taking a play directly out of the Kubernetes playbook. The Kubernetes core apis have this concept of layering objects on top of one another through the use of workload controllers. For example, the Kubernetes ReplicaSet is a workload controller layered on top of pods. The ReplicaSet controller manages ensuring that there are always ‘x’ number of pod replicas running within the cluster. If a ReplicaSet object declares that 5 pod replicas should be running, but a node dies bringing that total to 4, then the ReplicaSet workload controller manages spinning up a 5th pod in order to meet the declared replica count. The workload controller is always reconciling on the ReplicaSet objects desired state. Using this established Kubernetes pattern of layering objects on top of one another, we came up with our own virtualization specific API and corresponding workload controller called a “VirtualMachine” (big surprise there on the name, right?). Users declare a VirtualMachine object just like they would a pod by posting the VirtualMachine object’s manifest to the cluster. The big difference here that deviates from how pods are managed is that we allow VirtualMachine objects to be declared to exist in different states. For example, you can declare you want to “start” a virtual machine by setting “runStrategy: Always” on the VirtualMachine object’s spec. 
Likewise you can declare you want to “stop” a virtual machine by setting “runStrategy: Halted” on the VirtualMachine object’s spec. Behind the scenes, setting the “runStrategy” field results in the workload controller creating or deleting a pod for the virtual machine to live in. In the end, we essentially created the concept of an immortal VirtualMachine by laying our own custom API on top of mortal pods. Our API and controller know how to resurrect a “stopped” VirtualMachine by constructing a pod with all the right network, storage volumes, cpu, and memory attached in order to accurately bring the VirtualMachine back to life with the exact same state it stopped with. " }, { "id": 70, "url": "/2020/changelog-v0.28.0.html",
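For readers who would rather not hand-edit that field, the virtctl client drives the same declarative flip; a rough sketch (the VM name here is only a placeholder):

virtctl start my-vm   # declare the VM should run; the controller creates a VMI and its pod
virtctl stop my-vm    # declare it stopped; the controller tears the VMI and pod back down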

    "title": "KubeVirt: installing Microsoft Windows from an ISO", "author" : "Pedro Ibáñez Requena", "tags" : "kubevirt, kubernetes, virtual machine, Microsoft Windows kubernetes, Microsoft Windows container, Windows", - "body": "Warning! While this post still contains valuable information, a lot of it is outdated. For more up-to-date information, including Windows 11 installation, please refer to this post Hello! nowadays each operating system vendor has its cloud image available to download ready to import and deploy a new Virtual Machine (VM) inside Kubernetes with KubeVirt,but what if you want to follow the traditional way of installing a VM using an existing iso attached as a CD-ROM? In this blogpost, we are going to explain how to prepare that VM with the ISO file and the needed drivers to proceed with the installation of Microsoft Windows. Pre-requisites: A Kubernetes cluster is already up and running KubeVirt and CDI are already installed There is enough free CPU, Memory and disk space in the cluster to deploy a Microsoft Windows VM, in this example, the version 2012 R2 VM is going to be usedPreparation: To proceed with the Installation steps the different elements involved are listed: NOTE No need for executing any command until the Installation section. An empty KubeVirt Virtual Machine apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win2k12-isospec: running: false template: metadata: labels: kubevirt. io/domain: win2k12-iso spec: domain: cpu: cores: 4 devices: . . . machine: type: q35 resources: requests: memory: 8G volumes: . . . A PVC with the Microsoft Windows ISO file attached as CD-ROM to the VM, would be automatically created with the virtctl command when uploading the file First thing here is to download the ISO file of the Microsoft Windows, for that the Microsoft Evaluation Center offersthe ISO files to download for evaluation purposes: To be able to start the evaluation some personal data has to be filled in. Afterwards, the architecture to be checked is “64 bit” and the language selected as shown inthe following picture: Once the ISO file is downloaded it has to be uploaded with virtctl, the parameters used in this example are the following: image-upload: Upload a VM image to a PersistentVolumeClaim --image-path: The path of the ISO file --pvc-name: The name of the PVC to store the ISO file, in this example is iso-win2k12 --access-mode: the access mode for the PVC, in the example ReadOnlyMany has been used. --pvc-size: The size of the PVC, is where the ISO will be stored, in this case, the ISO is 4. 3G so a PVC OS 5G should be enough --uploadproxy-url: The URL of the cdi-upload proxy service, in the following example, the CLUSTER-IP is 10. 96. 164. 35 and the PORT is 443 Information To upload data to the cluster, the cdi-uploadproxy service must be accessible from outside the cluster. In a production environment, this probably involves setting up an Ingress or a LoadBalancer Service. $ kubectl get services -n cdi NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cdi-api ClusterIP 10. 96. 117. 29 <none> 443/TCP 6d18h cdi-uploadproxy ClusterIP 10. 96. 164. 35 <none> 443/TCP 6d18hIn this example the ISO file was copied to the Kubernetes node, to allow the virtctl to find it and to simplify the operation. --insecure: Allow insecure server connections when using HTTPS --wait-secs: The time in seconds to wait for upload pod to start. 
(default 60)The final command with the parameters and the values would look like: $ virtctl image-upload \ --image-path=/root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO \ --pvc-name=iso-win2k12 \ --access-mode=ReadOnlyMany \ --pvc-size=5G \ --uploadproxy-url=https://10. 96. 164. 35:443 \ --insecure \ --wait-secs=240 A PVC for the hard drive where the Operating System is going to be installed, in this example it is called winhd and the space requested is 15Gi: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhdspec: accessModes: - ReadWriteOnceresources: requests: storage: 15GistorageClassName: hostpath A container with the virtio drivers attached as a CD-ROM to the VM. The container image has to be pulled to have it available in the local registry. docker pull kubevirt/virtio-container-disk And also it has to be referenced in the VM YAML, in this example the name for the containerDisk is virtiocontainerdisk. - disk: bus: sata name: virtiocontainerdisk---- containerDisk: image: kubevirt/virtio-container-disk name: virtiocontainerdisk If the pre-requisites are fulfilled, the final YAML (win2k12. yml), will look like: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhdspec: accessModes: - ReadWriteOnce resources: requests: storage: 15Gi storageClassName: hostpathapiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win2k12-isospec: running: false template: metadata: labels: kubevirt. io/domain: win2k12-iso spec: domain: cpu: cores: 4 devices: disks: - bootOrder: 1 cdrom: bus: sata name: cdromiso - disk: bus: virtio name: harddrive - cdrom: bus: sata name: virtiocontainerdisk machine: type: q35 resources: requests: memory: 8G volumes: - name: cdromiso persistentVolumeClaim: claimName: iso-win2k12 - name: harddrive persistentVolumeClaim: claimName: winhd - containerDisk: image: kubevirt/virtio-container-disk name: virtiocontainerdisk Information Special attention to the bootOrder: 1 parameter in the first disk as it is the volume containing the ISO and it has to be marked as the first device to boot from. Installation: To proceed with the installation the commands commented above are going to be executed: Uploading the ISO file to the PVC: $ virtctl image-upload \--image-path=/root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO \--pvc-name=iso-win2k12 \--access-mode=ReadOnlyMany \--pvc-size=5G \--uploadproxy-url=https://10. 96. 164. 35:443 \--insecure \--wait-secs=240DataVolume default/iso-win2k12 createdWaiting for PVC iso-win2k12 upload pod to be ready. . . Pod now readyUploading data to https://10. 96. 164. 35:4434. 23 GiB / 4. 23 GiB [=======================================================================================================================================================================] 100. 00% 1m21sUploading /root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO completed successfully Pulling the virtio container image to the locally: $ docker pull kubevirt/virtio-container-diskUsing default tag: latestTrying to pull repository docker. io/kubevirt/virtio-container-disk . . . latest: Pulling from docker. io/kubevirt/virtio-container-diskDigest: sha256:7e5449cb6a4a9586a3cd79433eeaafd980cb516119c03e499492e1e37965fe82Status: Image is up to date for docker. io/kubevirt/virtio-container-disk:latest Creating the PVC and Virtual Machine definitions: $ kubectl create -f win2k12. 
ymlvirtualmachine. kubevirt. io/win2k12-iso configuredpersistentvolumeclaim/winhd created Starting the Virtual Machine Instance: $ virtctl start win2k12-isoVM win2k12-iso was scheduled to start$ kubectl get vmiNAME AGE PHASE IP NODENAMEwin2k12-iso 82s Running 10. 244. 0. 53 master-00. kubevirt-io Once the status of the VMI is RUNNING it’s time to connect using VNC: virtctl vnc win2k12-iso Here is important to comment that to be able to connect through VNC using virtctl it’s necessary to reach the Kubernetes API. The following video shows how to go through the Microsoft Windows installation process: Once the Virtual Machine is created, the PVC with the ISO and the virtio drivers can be unattached from the Virtual Machine. References: KubeVirt user-guide: Virtio Windows Driver disk usage Creating a registry image with a VM disk CDI Upload User Guide KubeVirt user-guide: How to obtain virtio drivers?" + "body": "Warning! While this post still contains valuable information, a lot of it is outdated. For more up-to-date information, including Windows 11 installation, please refer to this post Hello! nowadays each operating system vendor has its cloud image available to download ready to import and deploy a new Virtual Machine (VM) inside Kubernetes with KubeVirt,but what if you want to follow the traditional way of installing a VM using an existing iso attached as a CD-ROM? In this blogpost, we are going to explain how to prepare that VM with the ISO file and the needed drivers to proceed with the installation of Microsoft Windows. Pre-requisites: A Kubernetes cluster is already up and running KubeVirt and CDI are already installed There is enough free CPU, Memory and disk space in the cluster to deploy a Microsoft Windows VM, in this example, the version 2012 R2 VM is going to be usedPreparation: To proceed with the Installation steps the different elements involved are listed: NOTE No need for executing any command until the Installation section. An empty KubeVirt Virtual Machine apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win2k12-isospec: runStrategy: Halted template: metadata: labels: kubevirt. io/domain: win2k12-iso spec: domain: cpu: cores: 4 devices: . . . machine: type: q35 resources: requests: memory: 8G volumes: . . . A PVC with the Microsoft Windows ISO file attached as CD-ROM to the VM, would be automatically created with the virtctl command when uploading the file First thing here is to download the ISO file of the Microsoft Windows, for that the Microsoft Evaluation Center offersthe ISO files to download for evaluation purposes: To be able to start the evaluation some personal data has to be filled in. Afterwards, the architecture to be checked is “64 bit” and the language selected as shown inthe following picture: Once the ISO file is downloaded it has to be uploaded with virtctl, the parameters used in this example are the following: image-upload: Upload a VM image to a PersistentVolumeClaim --image-path: The path of the ISO file --pvc-name: The name of the PVC to store the ISO file, in this example is iso-win2k12 --access-mode: the access mode for the PVC, in the example ReadOnlyMany has been used. --pvc-size: The size of the PVC, is where the ISO will be stored, in this case, the ISO is 4. 3G so a PVC OS 5G should be enough --uploadproxy-url: The URL of the cdi-upload proxy service, in the following example, the CLUSTER-IP is 10. 96. 164. 
35 and the PORT is 443 Information To upload data to the cluster, the cdi-uploadproxy service must be accessible from outside the cluster. In a production environment, this probably involves setting up an Ingress or a LoadBalancer Service. $ kubectl get services -n cdi NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cdi-api ClusterIP 10. 96. 117. 29 <none> 443/TCP 6d18h cdi-uploadproxy ClusterIP 10. 96. 164. 35 <none> 443/TCP 6d18hIn this example the ISO file was copied to the Kubernetes node, to allow the virtctl to find it and to simplify the operation. --insecure: Allow insecure server connections when using HTTPS --wait-secs: The time in seconds to wait for upload pod to start. (default 60)The final command with the parameters and the values would look like: $ virtctl image-upload \ --image-path=/root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO \ --pvc-name=iso-win2k12 \ --access-mode=ReadOnlyMany \ --pvc-size=5G \ --uploadproxy-url=https://10. 96. 164. 35:443 \ --insecure \ --wait-secs=240 A PVC for the hard drive where the Operating System is going to be installed, in this example it is called winhd and the space requested is 15Gi: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhdspec: accessModes: - ReadWriteOnceresources: requests: storage: 15GistorageClassName: hostpath A container with the virtio drivers attached as a CD-ROM to the VM. The container image has to be pulled to have it available in the local registry. docker pull kubevirt/virtio-container-disk And also it has to be referenced in the VM YAML, in this example the name for the containerDisk is virtiocontainerdisk. - disk: bus: sata name: virtiocontainerdisk---- containerDisk: image: kubevirt/virtio-container-disk name: virtiocontainerdisk If the pre-requisites are fulfilled, the final YAML (win2k12. yml), will look like: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhdspec: accessModes: - ReadWriteOnce resources: requests: storage: 15Gi storageClassName: hostpathapiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win2k12-isospec: runStrategy: Halted template: metadata: labels: kubevirt. io/domain: win2k12-iso spec: domain: cpu: cores: 4 devices: disks: - bootOrder: 1 cdrom: bus: sata name: cdromiso - disk: bus: virtio name: harddrive - cdrom: bus: sata name: virtiocontainerdisk machine: type: q35 resources: requests: memory: 8G volumes: - name: cdromiso persistentVolumeClaim: claimName: iso-win2k12 - name: harddrive persistentVolumeClaim: claimName: winhd - containerDisk: image: kubevirt/virtio-container-disk name: virtiocontainerdisk Information Special attention to the bootOrder: 1 parameter in the first disk as it is the volume containing the ISO and it has to be marked as the first device to boot from. Installation: To proceed with the installation the commands commented above are going to be executed: Uploading the ISO file to the PVC: $ virtctl image-upload \--image-path=/root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO \--pvc-name=iso-win2k12 \--access-mode=ReadOnlyMany \--pvc-size=5G \--uploadproxy-url=https://10. 96. 164. 35:443 \--insecure \--wait-secs=240DataVolume default/iso-win2k12 createdWaiting for PVC iso-win2k12 upload pod to be ready. . . Pod now readyUploading data to https://10. 96. 164. 35:4434. 23 GiB / 4. 
23 GiB [=======================================================================================================================================================================] 100. 00% 1m21sUploading /root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO completed successfully Pulling the virtio container image to the locally: $ docker pull kubevirt/virtio-container-diskUsing default tag: latestTrying to pull repository docker. io/kubevirt/virtio-container-disk . . . latest: Pulling from docker. io/kubevirt/virtio-container-diskDigest: sha256:7e5449cb6a4a9586a3cd79433eeaafd980cb516119c03e499492e1e37965fe82Status: Image is up to date for docker. io/kubevirt/virtio-container-disk:latest Creating the PVC and Virtual Machine definitions: $ kubectl create -f win2k12. ymlvirtualmachine. kubevirt. io/win2k12-iso configuredpersistentvolumeclaim/winhd created Starting the Virtual Machine Instance: $ virtctl start win2k12-isoVM win2k12-iso was scheduled to start$ kubectl get vmiNAME AGE PHASE IP NODENAMEwin2k12-iso 82s Running 10. 244. 0. 53 master-00. kubevirt-io Once the status of the VMI is RUNNING it’s time to connect using VNC: virtctl vnc win2k12-iso Here is important to comment that to be able to connect through VNC using virtctl it’s necessary to reach the Kubernetes API. The following video shows how to go through the Microsoft Windows installation process: Once the Virtual Machine is created, the PVC with the ISO and the virtio drivers can be unattached from the Virtual Machine. References: KubeVirt user-guide: Virtio Windows Driver disk usage Creating a registry image with a VM disk CDI Upload User Guide KubeVirt user-guide: How to obtain virtio drivers?" }, { "id": 75, "url": "/2020/changelog-v0.26.0.html", @@ -1028,7 +1028,7 @@
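To make that last step concrete, once the installation finishes the VM definition can be slimmed down so only the hard drive stays attached. A sketch of the trimmed devices and volumes fragment, reusing the names from the manifest above (indentation abbreviated), could look like:

devices:
  disks:
  - disk:
      bus: virtio
    name: harddrive
volumes:
- name: harddrive
  persistentVolumeClaim:
    claimName: winhd

With the cdromiso and virtiocontainerdisk entries removed, a restart boots straight from the winhd disk and the bootOrder hint is no longer needed.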

    "title": "How to import VM into KubeVirt", "author" : "DirectedSoul", "tags" : "cdi, vm import", - "body": "Introduction: Kubernetes has become the new way to orchestrate the containers and to handle the microservices, but what if I already have applications running on my old VM’s in my datacenter ? Can those apps ever be made k8s friendly ? Well, if that is the use-case for you, then we have a solution with KubeVirt! In this blog post we will show you how to deploy a VM as a yaml template and the required steps on how to import it as a PVC onto your kubernetes environment using the CDI and KubeVirt add-ons. Assumptions: A basic understanding of the k8s architecture: In its simplest terms Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. For complete details check Kubernetes-architecture User is familiar with the concept of a Libvirt based VM PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. Feel free to check more on Persistent Volume(PV). Persistent Volume Claim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Feel free to check more on Persistent Volume Claim(PVC). User is familiar with the concept of KubeVirt-architecture and CDI-architecture User has already installed KubeVirt in an available K8s environment, if not please follow the link Installing KubeVirt to further proceed. User is already familiar with VM operation with Kubernetes, for a refresher on how to use ‘Virtual Machines’ in Kubernetes, please do check LAB 1 before proceeding. Creating Virtual Machines from local images with CDI and virtctl: The Containerized Data Importer (CDI) project provides facilities for enabling Persistent Volume Claims (PVCs) to be used as disks for KubeVirt VMs. The three main CDI use cases are: Import a disk image from a URL to a PVC (HTTP/S3) Clone an existing PVC Upload a local disk image to a PVCThis document covers the third use case and covers the HTTP based import use case at the end of this post. NOTE: You should have CDI installed in your cluster, a VM disk that you’d like to upload, and virtctl in your path Please follow the instructions for the installation of CDI (v1. 9. 0 as of this writing) Expose cdi-uploadproxy service: The cdi-uploadproxy service must be accessible from outside the cluster. Here are some ways to do that: NodePort Service Ingress RouteWe can take a look at example manifests here The supported image formats are: . img . iso . qcow2 Compressed (. tar, . gz or . xz) of the above formats. We will use this image from CirrOS Project (in . img format) We can use virtctl command for uploading the image as shown below: virtctl image-upload --helpUpload a VM image to a PersistentVolumeClaim. Usage: virtctl image-upload [flags]Examples: # Upload a local disk image to a newly created PersistentVolumeClaim: virtctl image-upload --uploadproxy-url=https://cdi-uploadproxy. mycluster. com --pvc-name=upload-pvc --pvc-size=10Gi --image-path=/images/fedora28. qcow2Flags: --access-mode string The access mode for the PVC. (default ReadWriteOnce ) -h, --help help for image-upload --image-path string Path to the local VM image. --insecure Allow insecure server connections when using HTTPS. 
--no-create Don't attempt to create a new PVC. --pvc-name string The destination PVC. --pvc-size string The size of the PVC to create (ex. 10Gi, 500Mi). --storage-class string The storage class for the PVC. --uploadproxy-url string The URL of the cdi-upload proxy service. --wait-secs uint Seconds to wait for upload pod to start. (default 60)Use virtctl options for a list of global command-line options (applies to all commands). Creation of VirtualMachineInstance from a PVC: Here, virtctl image-upload works by creating a PVC of the requested size, sending an UploadTokenRequest to the cdi-apiserver, and uploading the file to the cdi-uploadproxy. virtctl image-upload --pvc-name=cirros-vm-disk --pvc-size=500Mi --image-path=/home/shegde/images/cirros-0. 4. 0-x86_64-disk. img --uploadproxy-url=<url to upload proxy service>The data inside are ephemeral meaning is lost when the VM restarts, in order to prevent that, and provide a persistent data storage, we use PVC (persistentVolumeClaim) which allows connecting a PersistentVolumeClaim to a VM disk. cat <<EOF | kubectl apply -f -apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstancemetadata: name: cirros-vmspec: domain: devices: disks: - disk: bus: virtio name: pvcdisk machine: type: resources: requests: memory: 64M terminationGracePeriodSeconds: 0 volumes: - name: pvcdisk persistentVolumeClaim: claimName: cirros-vm-diskstatus: {}EOFA PersistentVolume can be in filesystem or block mode: Filesystem: For KubeVirt to be able to consume the disk present on a PersistentVolume’s filesystem, the disk must be named disk. img and be placed in the root path of the filesystem. Currently the disk is also required to be in raw format. Important: The disk. img image file needs to be owned by the user-id 107 in order to avoid permission issues. Additionally, if the disk. img image file has not been created manually before starting a VM then it will be created automatically with the PersistentVolumeClaim size. Since not every storage provisioner provides volumes with the exact usable amount of space as requested (e. g. due to filesystem overhead), KubeVirt tolerates up to 10% less available space. This can be configured with the pvc-tolerate-less-space-up-to-percent value in the kubevirt-config ConfigMap. Block: Use a block volume for consuming raw block devices. To do that, BlockVolume feature gate must be enabled. A simple example which attaches a PersistentVolumeClaim as a disk may look like this: metadata: name: testvmi-pvcapiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstancespec: domain: resources: requests: memory: 64M devices: disks: - name: mypvcdisk lun: {} volumes: - name: mypvcdisk persistentVolumeClaim: claimName: mypvcCreation with a DataVolume: DataVolumes are a way to automate importing virtual machine disks onto pvc’s during the virtual machine’s launch flow. Without using a DataVolume, users have to prepare a pvc with a disk image before assigning it to a VM or VMI manifest. With a DataVolume, both the pvc creation and import is automated on behalf of the user. DataVolume VM Behavior: DataVolumes can be defined in the VM spec directly by adding the DataVolumes to the dataVolumeTemplates list. Below is an example. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm-alpine-datavolume name: vm-alpine-datavolumespec: running: false template: metadata: labels: kubevirt. 
io/vm: vm-alpine-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 resources: requests: memory: 64M volumes: - dataVolume: #Note the type is dataVolume name: alpine-dv name: datavolumedisk1 dataVolumeTemplates: # Automatically a PVC of size 2Gi is created - metadata: name: alpine-dv spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: #This is the source where the ISO file resides http: url: http://cdi-http-import-server. kubevirt/images/alpine. isoFrom the above manifest the two main sections that needs an attention are source and pvc. The source part declares that there is a disk image living on an http server that we want to use as a volume for this VM. The pvc part declares the spec that should be used to create the pvc that hosts the source data. When this VM manifest is posted to the cluster, as part of the launch flow a pvc will be created using the spec provided and the source data will be automatically imported into that pvc before the VM starts. When the VM is deleted, the storage provisioned by the DataVolume will automatically be deleted as well. A few caveats to be considered before using DataVolumes: A DataVolume is a custom resource provided by the Containerized Data Importer (CDI) project. KubeVirt integrates with CDI in order to provide users a workflow for dynamically creating pvcs and importing data into those pvcs. In order to take advantage of the DataVolume volume source on a VM or VMI, the DataVolumes feature gate must be enabled in the kubevirt-config config map before KubeVirt is installed. CDI must also be installed(follow the steps as mentioned above). Enabling the DataVolumes feature gate: Below is an example of how to enable DataVolume support using the kubevirt-config config map. cat <<EOF | kubectl create -f -apiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. io: data: feature-gates: DataVolumes EOFThis config map assumes KubeVirt will be installed in the KubeVirt namespace. Change the namespace to suit your installation. First post the configmap above, then install KubeVirt. At that point DataVolume integration will be enabled. Wrap-up: As demonstrated, VM can be imported as a k8s object using a CDI project along with KubeVirt. For more detailed insights, please feel free to follow the KubeVirt project. " + "body": "Introduction: Kubernetes has become the new way to orchestrate the containers and to handle the microservices, but what if I already have applications running on my old VM’s in my datacenter ? Can those apps ever be made k8s friendly ? Well, if that is the use-case for you, then we have a solution with KubeVirt! In this blog post we will show you how to deploy a VM as a yaml template and the required steps on how to import it as a PVC onto your kubernetes environment using the CDI and KubeVirt add-ons. Assumptions: A basic understanding of the k8s architecture: In its simplest terms Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. For complete details check Kubernetes-architecture User is familiar with the concept of a Libvirt based VM PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. Feel free to check more on Persistent Volume(PV). 
Persistent Volume Claim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Feel free to check more on Persistent Volume Claim(PVC). User is familiar with the concept of KubeVirt-architecture and CDI-architecture User has already installed KubeVirt in an available K8s environment, if not please follow the link Installing KubeVirt to further proceed. User is already familiar with VM operation with Kubernetes, for a refresher on how to use ‘Virtual Machines’ in Kubernetes, please do check LAB 1 before proceeding. Creating Virtual Machines from local images with CDI and virtctl: The Containerized Data Importer (CDI) project provides facilities for enabling Persistent Volume Claims (PVCs) to be used as disks for KubeVirt VMs. The three main CDI use cases are: Import a disk image from a URL to a PVC (HTTP/S3) Clone an existing PVC Upload a local disk image to a PVCThis document covers the third use case and covers the HTTP based import use case at the end of this post. NOTE: You should have CDI installed in your cluster, a VM disk that you’d like to upload, and virtctl in your path Please follow the instructions for the installation of CDI (v1. 9. 0 as of this writing) Expose cdi-uploadproxy service: The cdi-uploadproxy service must be accessible from outside the cluster. Here are some ways to do that: NodePort Service Ingress RouteWe can take a look at example manifests here The supported image formats are: . img . iso . qcow2 Compressed (. tar, . gz or . xz) of the above formats. We will use this image from CirrOS Project (in . img format) We can use virtctl command for uploading the image as shown below: virtctl image-upload --helpUpload a VM image to a PersistentVolumeClaim. Usage: virtctl image-upload [flags]Examples: # Upload a local disk image to a newly created PersistentVolumeClaim: virtctl image-upload --uploadproxy-url=https://cdi-uploadproxy. mycluster. com --pvc-name=upload-pvc --pvc-size=10Gi --image-path=/images/fedora28. qcow2Flags: --access-mode string The access mode for the PVC. (default ReadWriteOnce ) -h, --help help for image-upload --image-path string Path to the local VM image. --insecure Allow insecure server connections when using HTTPS. --no-create Don't attempt to create a new PVC. --pvc-name string The destination PVC. --pvc-size string The size of the PVC to create (ex. 10Gi, 500Mi). --storage-class string The storage class for the PVC. --uploadproxy-url string The URL of the cdi-upload proxy service. --wait-secs uint Seconds to wait for upload pod to start. (default 60)Use virtctl options for a list of global command-line options (applies to all commands). Creation of VirtualMachineInstance from a PVC: Here, virtctl image-upload works by creating a PVC of the requested size, sending an UploadTokenRequest to the cdi-apiserver, and uploading the file to the cdi-uploadproxy. virtctl image-upload --pvc-name=cirros-vm-disk --pvc-size=500Mi --image-path=/home/shegde/images/cirros-0. 4. 0-x86_64-disk. img --uploadproxy-url=<url to upload proxy service>The data inside are ephemeral meaning is lost when the VM restarts, in order to prevent that, and provide a persistent data storage, we use PVC (persistentVolumeClaim) which allows connecting a PersistentVolumeClaim to a VM disk. cat <<EOF | kubectl apply -f -apiVersion: kubevirt. 
io/v1alpha3kind: VirtualMachineInstancemetadata: name: cirros-vmspec: domain: devices: disks: - disk: bus: virtio name: pvcdisk machine: type: resources: requests: memory: 64M terminationGracePeriodSeconds: 0 volumes: - name: pvcdisk persistentVolumeClaim: claimName: cirros-vm-diskstatus: {}EOFA PersistentVolume can be in filesystem or block mode: Filesystem: For KubeVirt to be able to consume the disk present on a PersistentVolume’s filesystem, the disk must be named disk. img and be placed in the root path of the filesystem. Currently the disk is also required to be in raw format. Important: The disk. img image file needs to be owned by the user-id 107 in order to avoid permission issues. Additionally, if the disk. img image file has not been created manually before starting a VM then it will be created automatically with the PersistentVolumeClaim size. Since not every storage provisioner provides volumes with the exact usable amount of space as requested (e. g. due to filesystem overhead), KubeVirt tolerates up to 10% less available space. This can be configured with the pvc-tolerate-less-space-up-to-percent value in the kubevirt-config ConfigMap. Block: Use a block volume for consuming raw block devices. To do that, BlockVolume feature gate must be enabled. A simple example which attaches a PersistentVolumeClaim as a disk may look like this: metadata: name: testvmi-pvcapiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstancespec: domain: resources: requests: memory: 64M devices: disks: - name: mypvcdisk lun: {} volumes: - name: mypvcdisk persistentVolumeClaim: claimName: mypvcCreation with a DataVolume: DataVolumes are a way to automate importing virtual machine disks onto pvc’s during the virtual machine’s launch flow. Without using a DataVolume, users have to prepare a pvc with a disk image before assigning it to a VM or VMI manifest. With a DataVolume, both the pvc creation and import is automated on behalf of the user. DataVolume VM Behavior: DataVolumes can be defined in the VM spec directly by adding the DataVolumes to the dataVolumeTemplates list. Below is an example. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm-alpine-datavolume name: vm-alpine-datavolumespec: runStrategy: Halted template: metadata: labels: kubevirt. io/vm: vm-alpine-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 resources: requests: memory: 64M volumes: - dataVolume: #Note the type is dataVolume name: alpine-dv name: datavolumedisk1 dataVolumeTemplates: # Automatically a PVC of size 2Gi is created - metadata: name: alpine-dv spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: #This is the source where the ISO file resides http: url: http://cdi-http-import-server. kubevirt/images/alpine. isoFrom the above manifest the two main sections that needs an attention are source and pvc. The source part declares that there is a disk image living on an http server that we want to use as a volume for this VM. The pvc part declares the spec that should be used to create the pvc that hosts the source data. When this VM manifest is posted to the cluster, as part of the launch flow a pvc will be created using the spec provided and the source data will be automatically imported into that pvc before the VM starts. When the VM is deleted, the storage provisioned by the DataVolume will automatically be deleted as well. 
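Because the example VM above is declared with runStrategy: Halted, a short sequence like the following (names taken from the example, and only a sketch) shows the launch flow end to end:

virtctl start vm-alpine-datavolume
kubectl get datavolume alpine-dv -o jsonpath='{.status.phase}'   # the import runs first; expect Succeeded
kubectl get vmi vm-alpine-datavolume                             # the VMI appears once the disk is populated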
A few caveats to be considered before using DataVolumes: A DataVolume is a custom resource provided by the Containerized Data Importer (CDI) project. KubeVirt integrates with CDI in order to provide users with a workflow for dynamically creating pvcs and importing data into those pvcs. In order to take advantage of the DataVolume volume source on a VM or VMI, the DataVolumes feature gate must be enabled in the kubevirt-config config map before KubeVirt is installed. CDI must also be installed (follow the steps mentioned above). Enabling the DataVolumes feature gate: Below is an example of how to enable DataVolume support using the kubevirt-config config map. cat <<EOF | kubectl create -f -apiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. io: data: feature-gates: DataVolumes EOFThis config map assumes KubeVirt will be installed in the kubevirt namespace. Change the namespace to suit your installation. First post the configmap above, then install KubeVirt. At that point DataVolume integration will be enabled. Wrap-up: As demonstrated, a VM can be imported as a Kubernetes object using the CDI project together with KubeVirt. For more detailed insights, please feel free to follow the KubeVirt project. " }, { "id": 103, "url": "/2019/website-roadmap.html",
@@ -1098,7 +1098,7 @@

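As a quick sanity check for the DataVolumes feature gate discussed in the previous post, something along these lines can confirm what the kubevirt-config ConfigMap currently advertises (the kubevirt namespace is assumed, as in the example above):

    # Print the feature gates currently set in the kubevirt-config ConfigMap
    kubectl get configmap kubevirt-config -n kubevirt -o jsonpath='{.data.feature-gates}'
    # The output is expected to contain DataVolumes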
    "title": "More About Kubevirt Metrics", "author" : "fromanirh", "tags" : "metrics, prometheus", - "body": "More about KubeVirt and Prometheus metricsIn this blog post, we update about the KubeVirt metrics, continuing the series started earlier this year. Since the previous post, the initial groundwork and first set of metrics was merged, and it is expectedto be available with KubeVirt v0. 15. 0 and onwards. Make sure you followed the steps described in the previous post to set up properly the monitoring stackin your KubeVirt-powered cluster. New metrics: Let’s look at the initial set of metrics exposed by KubeVirt 0. 15. 0: kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0-alpha. 0. 74+d7aaf3b5df4a60-dirty }kubevirt_vm_memory_resident_bytes{domain= $VM_NAME }kubevirt_vm_network_traffic_bytes_total{domain= $VM_NAME ,interface= $IFACE_NAME0 ,type= rx }kubevirt_vm_network_traffic_bytes_total{domain= $VM_NAME ,interface= $IFACE_NAME0 ,type= tx }kubevirt_vm_storage_iops_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_iops_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_storage_times_ms_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_times_ms_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_storage_traffic_bytes_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_traffic_bytes_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_vcpu_seconds{domain= $VM_NAME ,id= 0 ,state= 1 }The metrics expose versioning information according to the recommendations using the kubevirt_info metric; the other metrics should be self-explanatory. As we can expect, labels like domain, drive and interface depend on the specifics of the VM. type, however, is not and represents the subtype of the metric. Let’s now see a real life example, from this idle, diskless VM: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: creationTimestamp: null labels: kubevirt. io/vm: vm-test-01 name: vm-test-01spec: running: false template: metadata: creationTimestamp: null labels: kubevirt. io/vm: vm-test-01 spec: domain: devices: interfaces: - name: default bridge: {} machine: type: resources: requests: memory: 64M networks: - name: default pod: {} terminationGracePeriodSeconds: 0status: {}Querying the endpoint (see below) yields something like kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0 } 1kubevirt_vm_memory_resident_bytes{domain= default_vm-test-01 } 4. 25984e+07kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= rx } 90kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= tx } 0kubevirt_vm_vcpu_seconds{domain= default_vm-test-01 ,id= 0 ,state= 1 } 613Example of how the kubevirt_vm_memory_resident_bytes metric looks like in the Prometheus UI Accessing the metrics programmatically: We can access the VM metrics using the standard Prometheus API. For example, let’s get the same data about the memory consumption we have seen above in the Prometheus UI. The following query yields all the data for the year 2019, aggregated every two hours. Not much data in this case, but beware of potentially large result sets. curl -g 'http://$CLUSTER_IP:9090/api/v1/query_range?query=kubevirt_vm_memory_resident_bytes&start=2019-01-01T00:00:00. 001Z&end=2019-12-31T23:59:59. 999Z&step=7200s' | json_ppWhich yields something like { data : { resultType : matrix , result : [ { values : [ [1552514400. 
001, 44036096 ], [1552521600. 001, 42348544 ], [1552528800. 001, 44040192 ], [1552536000. 001, 42291200 ], [1552543200. 001, 42450944 ], [1552550400. 001, 43315200 ] ], metric : { __name__ : kubevirt_vm_memory_resident_bytes , job : kubevirt-prometheus-metrics , endpoint : metrics , pod : virt-handler-6ng6j , domain : default_vm-test-01 , instance : 10. 244. 0. 29:8443 , service : kubevirt-prometheus-metrics , namespace : kubevirt } } ] }, status : success }Troubleshooting tips: We strive to make the monitoring experience seamless, streamlined and working out of the box, but the stack is still evolving fast,and there are many options to actually set up the monitoring stack. Here we present some troubleshooting tips for the most common issues. prometheus targets: An underused feature of the Prometheus server is the target configuration. The Prometehus server exposes data about the targets it islooking for, so we can easily asses if the Prometheus server knows that it must scrape the kubevirt endpoints for metrics. We can see this both in the Prometheus UI: Or programmatically, with the Prometheus REST API: curl -g 'http://192. 168. 48. 7:9090/api/v1/targets' | json_pp(output trimmed for brevity): { data : { activeTargets : [ { lastError : , lastScrape : 2019-03-14T13:38:52. 886262669Z , scrapeUrl : https://10. 244. 0. 72:8443/metrics , labels : { service : kubevirt-prometheus-metrics , instance : 10. 244. 0. 72:8443 , job : kubevirt-prometheus-metrics , pod : virt-handler-6ng6j , endpoint : metrics , namespace : kubevirt }, discoveredLabels : { __meta_kubernetes_pod_phase : Running , __meta_kubernetes_endpoints_name : kubevirt-prometheus-metrics , __meta_kubernetes_endpoint_address_target_name : virt-handler-6ng6j , __meta_kubernetes_service_name : kubevirt-prometheus-metrics , __meta_kubernetes_pod_label_pod_template_generation : 1 , __meta_kubernetes_endpoint_port_name : metrics , __meta_kubernetes_service_label_app_kubernetes_io_managed_by : kubevirt-operator , __meta_kubernetes_pod_name : virt-handler-6ng6j , __address__ : 10. 244. 0. 72:8443 , __meta_kubernetes_pod_container_name : virt-handler , __meta_kubernetes_pod_container_port_number : 8443 , __meta_kubernetes_pod_controller_kind : DaemonSet , __meta_kubernetes_pod_label_kubevirt_io : virt-handler , __meta_kubernetes_pod_label_controller_revision_hash : 7bc9c7665b , __meta_kubernetes_pod_container_port_name : metrics , __meta_kubernetes_pod_ready : true , __scheme__ : https , __meta_kubernetes_namespace : kubevirt , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_tolerations : [{\ key\ :\ CriticalAddonsOnly\ ,\ operator\ :\ Exists\ }] , __meta_kubernetes_pod_container_port_protocol : TCP , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_critical_pod : , __meta_kubernetes_pod_label_prometheus_kubevirt_io : , __metrics_path__ : /metrics , __meta_kubernetes_pod_controller_name : virt-handler , __meta_kubernetes_pod_node_name : c7-allinone-2. kube. lan , __meta_kubernetes_endpoint_address_target_kind : Pod , __meta_kubernetes_endpoint_port_protocol : TCP , __meta_kubernetes_service_label_prometheus_kubevirt_io : , __meta_kubernetes_pod_uid : 7d65f67a-45c8-11e9-8567-5254000be9ec , job : kubevirt/kubevirt/0 , __meta_kubernetes_service_label_kubevirt_io : , __meta_kubernetes_pod_ip : 10. 244. 0. 72 , __meta_kubernetes_endpoint_ready : true , __meta_kubernetes_pod_host_ip : 192. 168. 48. 
7 }, health : up } ], droppedTargets : [ { discoveredLabels : { __meta_kubernetes_service_name : virt-api , __meta_kubernetes_endpoint_address_target_name : virt-api-649859444c-dnvnm , __meta_kubernetes_pod_phase : Running , __meta_kubernetes_endpoints_name : virt-api , __meta_kubernetes_pod_container_name : virt-api , __meta_kubernetes_service_label_app_kubernetes_io_managed_by : kubevirt-operator , __meta_kubernetes_pod_name : virt-api-649859444c-dnvnm , __address__ : 10. 244. 0. 59:8443 , __meta_kubernetes_endpoint_port_name : , __meta_kubernetes_pod_container_port_name : virt-api , __meta_kubernetes_pod_ready : true , __meta_kubernetes_pod_label_kubevirt_io : virt-api , __meta_kubernetes_pod_controller_kind : ReplicaSet , __meta_kubernetes_pod_container_port_number : 8443 , __meta_kubernetes_namespace : kubevirt , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_tolerations : [{\ key\ :\ CriticalAddonsOnly\ ,\ operator\ :\ Exists\ }] , __scheme__ : https , __meta_kubernetes_pod_label_prometheus_kubevirt_io : , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_critical_pod : , __meta_kubernetes_pod_container_port_protocol : TCP , __metrics_path__ : /metrics , __meta_kubernetes_endpoint_address_target_kind : Pod , __meta_kubernetes_endpoint_port_protocol : TCP , __meta_kubernetes_pod_controller_name : virt-api-649859444c , __meta_kubernetes_pod_label_pod_template_hash : 649859444c , __meta_kubernetes_pod_node_name : c7-allinone-2. kube. lan , __meta_kubernetes_pod_host_ip : 192. 168. 48. 7 , job : kubevirt/kubevirt/0 , __meta_kubernetes_service_label_kubevirt_io : virt-api , __meta_kubernetes_endpoint_ready : true , __meta_kubernetes_pod_ip : 10. 244. 0. 59 , __meta_kubernetes_pod_uid : 7d5c3299-45c8-11e9-8567-5254000be9ec } } ] }, status : success }The Prometheus target state gives us a very useful information that shapes the next steps during the troubleshooting: does the Prometheus server know it should scrape our target? If no, we should check the Prometheus configuration, which is, in our case, driven by the Prometheus operator. Otherwise: can the Prometheus server access the endpoint? If no, we need to check the network connectivity/DNS configuration, or the endpoint itselfservicemonitors: servicemonitors are the objects the prometheus-operator consume to produce the right prometheus configuration that the server running in the clusterwill consume to scrape the metrics endpoints. See the documentation for all the details. We describe two of the most common pitfalls. create the servicemonitor in the right namespace: KubeVirt services run in the kubevirt namespace. Make sure to create the servicemonitor in the same namespace: kubectl get pods -n kubevirtNAME READY STATUS RESTARTS AGEvirt-api-649859444c-dnvnm 1/1 Running 2 19hvirt-api-649859444c-j9d94 1/1 Running 2 19hvirt-controller-7f49b8f77c-8kh46 1/1 Running 2 19hvirt-controller-7f49b8f77c-qk4hq 1/1 Running 2 19hvirt-handler-6ng6j 1/1 Running 2 19hvirt-operator-6c5db798d4-wr9wl 1/1 Running 6 19hkubectl get servicemonitor -n kubevirtNAME AGEkubevirt 16hActually, the servicemonitor should be created in the same namespace on which the kubevirt-prometheus-metrics service is defined: kubectl get svc -n kubevirtNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEkubevirt-prometheus-metrics ClusterIP 10. 109. 85. 101 <none> 443/TCP 19hvirt-api ClusterIP 10. 109. 162. 102 <none> 443/TCP 19hSee the KubeVirt documentation for all the details. 
configure the Prometheus instance to look in the right namespace: The prometheus server instance(s) run by default in their own namespace; this is the recommended configuration, and running them in the same kubevirt namespaceis not recommended anyway. So, make sure that the prometheus configuration we use looks in all the relevant namespaces, using something like apiVersion: monitoring. coreos. com/v1kind: Prometheusmetadata: name: prometheusspec: serviceAccountName: prometheus serviceMonitorNamespaceSelector: matchLabels: prometheus. kubevirt. io: serviceMonitorSelector: matchLabels: prometheus. kubevirt. io: resources: requests: memory: 400MiPlease note the usage of the serviceMonitorNamespaceSelector. See here and herefor more details. Namespaces must have the right label, prometheus. kubevirt. io, to be searched for servicemonitors. The kubevirt namespace is, of course, set correctly by default apiVersion: v1kind: Namespacemetadata: creationTimestamp: 2019-03-13T19:43:25Z labels: kubevirt. io: prometheus. kubevirt. io: name: kubevirt resourceVersion: 228178 selfLink: /api/v1/namespaces/kubevirt uid: 44a0783f-45c8-11e9-8567-5254000be9ecspec: finalizers: - kubernetesstatus: phase: ActiveBut please make sure that any other namespace you may want to monitor has the correct label. endpoint state: As in KubeVirt 0. 15. 0, virt-handler is the component which exposes the VM metrics through its Prometheus endpoint. Let’s check it reports the data correctly. First, let’s get the virt-handler IP address. We look out the instance we want to check with kubectl get pods -n kubevirtThen we query the address: kubectl get pod -o json -n KubeVirt $VIRT_HANDLER_POD | jq -r '. status. podIP'Prometheus tooling adds lots of metrics about internal state. In this case we care only about kubevirt-related metrics, so we filter out everything else with something like grep -E '^kubevirt_'Putting all together: curl -s -k -L https://$(kubectl get pod -o json -n KubeVirt virt-handler-6ng6j | jq -r '. status. podIP'):8443/metrics | grep -E '^kubevirt_'Let’s see how a healthy output looks like: kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0 } 1kubevirt_vm_memory_resident_bytes{domain= default_vm-test-01 } 4. 1168896e+07kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= rx } 90kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= tx } 0kubevirt_vm_vcpu_seconds{domain= default_vm-test-01 ,id= 0 ,state= 1 } 5173Please remember that some metrics can be correctly omitted for some VMs. In general, we should always see metrics about version (pseudo metric), memory, network, and CPU. But there are known cases on which not having storage metrics is expected and correct: for example this case, since we are using a diskless VM. Coming next: The KubeVirt team is still working to enhance and refine the metrics support. There are two main active topics. First, discussion is ongoing about adding more metrics,depending on the needs of the community or the needs of the ecosystem. Furthermore, there is work in progress to increase the robustnessand the reliability of the monitoring. We also have plans to improve the integration with kubernetes. Stay tuned for more updates! " + "body": "More about KubeVirt and Prometheus metricsIn this blog post, we update about the KubeVirt metrics, continuing the series started earlier this year. 
Since the previous post, the initial groundwork and first set of metrics was merged, and it is expectedto be available with KubeVirt v0. 15. 0 and onwards. Make sure you followed the steps described in the previous post to set up properly the monitoring stackin your KubeVirt-powered cluster. New metrics: Let’s look at the initial set of metrics exposed by KubeVirt 0. 15. 0: kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0-alpha. 0. 74+d7aaf3b5df4a60-dirty }kubevirt_vm_memory_resident_bytes{domain= $VM_NAME }kubevirt_vm_network_traffic_bytes_total{domain= $VM_NAME ,interface= $IFACE_NAME0 ,type= rx }kubevirt_vm_network_traffic_bytes_total{domain= $VM_NAME ,interface= $IFACE_NAME0 ,type= tx }kubevirt_vm_storage_iops_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_iops_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_storage_times_ms_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_times_ms_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_storage_traffic_bytes_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_traffic_bytes_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_vcpu_seconds{domain= $VM_NAME ,id= 0 ,state= 1 }The metrics expose versioning information according to the recommendations using the kubevirt_info metric; the other metrics should be self-explanatory. As we can expect, labels like domain, drive and interface depend on the specifics of the VM. type, however, is not and represents the subtype of the metric. Let’s now see a real life example, from this idle, diskless VM: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: creationTimestamp: null labels: kubevirt. io/vm: vm-test-01 name: vm-test-01spec: runStrategy: Halted template: metadata: creationTimestamp: null labels: kubevirt. io/vm: vm-test-01 spec: domain: devices: interfaces: - name: default bridge: {} machine: type: resources: requests: memory: 64M networks: - name: default pod: {} terminationGracePeriodSeconds: 0status: {}Querying the endpoint (see below) yields something like kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0 } 1kubevirt_vm_memory_resident_bytes{domain= default_vm-test-01 } 4. 25984e+07kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= rx } 90kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= tx } 0kubevirt_vm_vcpu_seconds{domain= default_vm-test-01 ,id= 0 ,state= 1 } 613Example of how the kubevirt_vm_memory_resident_bytes metric looks like in the Prometheus UI Accessing the metrics programmatically: We can access the VM metrics using the standard Prometheus API. For example, let’s get the same data about the memory consumption we have seen above in the Prometheus UI. The following query yields all the data for the year 2019, aggregated every two hours. Not much data in this case, but beware of potentially large result sets. curl -g 'http://$CLUSTER_IP:9090/api/v1/query_range?query=kubevirt_vm_memory_resident_bytes&start=2019-01-01T00:00:00. 001Z&end=2019-12-31T23:59:59. 999Z&step=7200s' | json_ppWhich yields something like { data : { resultType : matrix , result : [ { values : [ [1552514400. 001, 44036096 ], [1552521600. 001, 42348544 ], [1552528800. 001, 44040192 ], [1552536000. 001, 42291200 ], [1552543200. 001, 42450944 ], [1552550400. 
001, 43315200 ] ], metric : { __name__ : kubevirt_vm_memory_resident_bytes , job : kubevirt-prometheus-metrics , endpoint : metrics , pod : virt-handler-6ng6j , domain : default_vm-test-01 , instance : 10. 244. 0. 29:8443 , service : kubevirt-prometheus-metrics , namespace : kubevirt } } ] }, status : success }Troubleshooting tips: We strive to make the monitoring experience seamless, streamlined and working out of the box, but the stack is still evolving fast,and there are many options to actually set up the monitoring stack. Here we present some troubleshooting tips for the most common issues. prometheus targets: An underused feature of the Prometheus server is the target configuration. The Prometehus server exposes data about the targets it islooking for, so we can easily asses if the Prometheus server knows that it must scrape the kubevirt endpoints for metrics. We can see this both in the Prometheus UI: Or programmatically, with the Prometheus REST API: curl -g 'http://192. 168. 48. 7:9090/api/v1/targets' | json_pp(output trimmed for brevity): { data : { activeTargets : [ { lastError : , lastScrape : 2019-03-14T13:38:52. 886262669Z , scrapeUrl : https://10. 244. 0. 72:8443/metrics , labels : { service : kubevirt-prometheus-metrics , instance : 10. 244. 0. 72:8443 , job : kubevirt-prometheus-metrics , pod : virt-handler-6ng6j , endpoint : metrics , namespace : kubevirt }, discoveredLabels : { __meta_kubernetes_pod_phase : Running , __meta_kubernetes_endpoints_name : kubevirt-prometheus-metrics , __meta_kubernetes_endpoint_address_target_name : virt-handler-6ng6j , __meta_kubernetes_service_name : kubevirt-prometheus-metrics , __meta_kubernetes_pod_label_pod_template_generation : 1 , __meta_kubernetes_endpoint_port_name : metrics , __meta_kubernetes_service_label_app_kubernetes_io_managed_by : kubevirt-operator , __meta_kubernetes_pod_name : virt-handler-6ng6j , __address__ : 10. 244. 0. 72:8443 , __meta_kubernetes_pod_container_name : virt-handler , __meta_kubernetes_pod_container_port_number : 8443 , __meta_kubernetes_pod_controller_kind : DaemonSet , __meta_kubernetes_pod_label_kubevirt_io : virt-handler , __meta_kubernetes_pod_label_controller_revision_hash : 7bc9c7665b , __meta_kubernetes_pod_container_port_name : metrics , __meta_kubernetes_pod_ready : true , __scheme__ : https , __meta_kubernetes_namespace : kubevirt , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_tolerations : [{\ key\ :\ CriticalAddonsOnly\ ,\ operator\ :\ Exists\ }] , __meta_kubernetes_pod_container_port_protocol : TCP , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_critical_pod : , __meta_kubernetes_pod_label_prometheus_kubevirt_io : , __metrics_path__ : /metrics , __meta_kubernetes_pod_controller_name : virt-handler , __meta_kubernetes_pod_node_name : c7-allinone-2. kube. lan , __meta_kubernetes_endpoint_address_target_kind : Pod , __meta_kubernetes_endpoint_port_protocol : TCP , __meta_kubernetes_service_label_prometheus_kubevirt_io : , __meta_kubernetes_pod_uid : 7d65f67a-45c8-11e9-8567-5254000be9ec , job : kubevirt/kubevirt/0 , __meta_kubernetes_service_label_kubevirt_io : , __meta_kubernetes_pod_ip : 10. 244. 0. 72 , __meta_kubernetes_endpoint_ready : true , __meta_kubernetes_pod_host_ip : 192. 168. 48. 
7 }, health : up } ], droppedTargets : [ { discoveredLabels : { __meta_kubernetes_service_name : virt-api , __meta_kubernetes_endpoint_address_target_name : virt-api-649859444c-dnvnm , __meta_kubernetes_pod_phase : Running , __meta_kubernetes_endpoints_name : virt-api , __meta_kubernetes_pod_container_name : virt-api , __meta_kubernetes_service_label_app_kubernetes_io_managed_by : kubevirt-operator , __meta_kubernetes_pod_name : virt-api-649859444c-dnvnm , __address__ : 10. 244. 0. 59:8443 , __meta_kubernetes_endpoint_port_name : , __meta_kubernetes_pod_container_port_name : virt-api , __meta_kubernetes_pod_ready : true , __meta_kubernetes_pod_label_kubevirt_io : virt-api , __meta_kubernetes_pod_controller_kind : ReplicaSet , __meta_kubernetes_pod_container_port_number : 8443 , __meta_kubernetes_namespace : kubevirt , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_tolerations : [{\ key\ :\ CriticalAddonsOnly\ ,\ operator\ :\ Exists\ }] , __scheme__ : https , __meta_kubernetes_pod_label_prometheus_kubevirt_io : , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_critical_pod : , __meta_kubernetes_pod_container_port_protocol : TCP , __metrics_path__ : /metrics , __meta_kubernetes_endpoint_address_target_kind : Pod , __meta_kubernetes_endpoint_port_protocol : TCP , __meta_kubernetes_pod_controller_name : virt-api-649859444c , __meta_kubernetes_pod_label_pod_template_hash : 649859444c , __meta_kubernetes_pod_node_name : c7-allinone-2. kube. lan , __meta_kubernetes_pod_host_ip : 192. 168. 48. 7 , job : kubevirt/kubevirt/0 , __meta_kubernetes_service_label_kubevirt_io : virt-api , __meta_kubernetes_endpoint_ready : true , __meta_kubernetes_pod_ip : 10. 244. 0. 59 , __meta_kubernetes_pod_uid : 7d5c3299-45c8-11e9-8567-5254000be9ec } } ] }, status : success }The Prometheus target state gives us a very useful information that shapes the next steps during the troubleshooting: does the Prometheus server know it should scrape our target? If no, we should check the Prometheus configuration, which is, in our case, driven by the Prometheus operator. Otherwise: can the Prometheus server access the endpoint? If no, we need to check the network connectivity/DNS configuration, or the endpoint itselfservicemonitors: servicemonitors are the objects the prometheus-operator consume to produce the right prometheus configuration that the server running in the clusterwill consume to scrape the metrics endpoints. See the documentation for all the details. We describe two of the most common pitfalls. create the servicemonitor in the right namespace: KubeVirt services run in the kubevirt namespace. Make sure to create the servicemonitor in the same namespace: kubectl get pods -n kubevirtNAME READY STATUS RESTARTS AGEvirt-api-649859444c-dnvnm 1/1 Running 2 19hvirt-api-649859444c-j9d94 1/1 Running 2 19hvirt-controller-7f49b8f77c-8kh46 1/1 Running 2 19hvirt-controller-7f49b8f77c-qk4hq 1/1 Running 2 19hvirt-handler-6ng6j 1/1 Running 2 19hvirt-operator-6c5db798d4-wr9wl 1/1 Running 6 19hkubectl get servicemonitor -n kubevirtNAME AGEkubevirt 16hActually, the servicemonitor should be created in the same namespace on which the kubevirt-prometheus-metrics service is defined: kubectl get svc -n kubevirtNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEkubevirt-prometheus-metrics ClusterIP 10. 109. 85. 101 <none> 443/TCP 19hvirt-api ClusterIP 10. 109. 162. 102 <none> 443/TCP 19hSee the KubeVirt documentation for all the details. 
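Tying the two pitfalls together, a minimal ServiceMonitor for the kubevirt-prometheus-metrics service could look like the sketch below. The prometheus.kubevirt.io label and the metrics port name are the ones visible in the target labels earlier in this post, and the insecure TLS setting is only there because the endpoint typically serves a self-signed certificate, so double-check every value against your installation:

    cat <<EOF | kubectl apply -f -
    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: kubevirt
      namespace: kubevirt             # same namespace as the kubevirt-prometheus-metrics service
      labels:
        prometheus.kubevirt.io: ""
    spec:
      selector:
        matchLabels:
          prometheus.kubevirt.io: ""  # label carried by the kubevirt-prometheus-metrics service
      endpoints:
      - port: metrics                 # endpoint port name seen in the Prometheus target labels
        scheme: https
        tlsConfig:
          insecureSkipVerify: true    # the endpoint usually serves a self-signed certificate
    EOF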
configure the Prometheus instance to look in the right namespace: The prometheus server instance(s) run by default in their own namespace; this is the recommended configuration, and running them in the same kubevirt namespaceis not recommended anyway. So, make sure that the prometheus configuration we use looks in all the relevant namespaces, using something like apiVersion: monitoring. coreos. com/v1kind: Prometheusmetadata: name: prometheusspec: serviceAccountName: prometheus serviceMonitorNamespaceSelector: matchLabels: prometheus. kubevirt. io: serviceMonitorSelector: matchLabels: prometheus. kubevirt. io: resources: requests: memory: 400MiPlease note the usage of the serviceMonitorNamespaceSelector. See here and herefor more details. Namespaces must have the right label, prometheus. kubevirt. io, to be searched for servicemonitors. The kubevirt namespace is, of course, set correctly by default apiVersion: v1kind: Namespacemetadata: creationTimestamp: 2019-03-13T19:43:25Z labels: kubevirt. io: prometheus. kubevirt. io: name: kubevirt resourceVersion: 228178 selfLink: /api/v1/namespaces/kubevirt uid: 44a0783f-45c8-11e9-8567-5254000be9ecspec: finalizers: - kubernetesstatus: phase: ActiveBut please make sure that any other namespace you may want to monitor has the correct label. endpoint state: As in KubeVirt 0. 15. 0, virt-handler is the component which exposes the VM metrics through its Prometheus endpoint. Let’s check it reports the data correctly. First, let’s get the virt-handler IP address. We look out the instance we want to check with kubectl get pods -n kubevirtThen we query the address: kubectl get pod -o json -n KubeVirt $VIRT_HANDLER_POD | jq -r '. status. podIP'Prometheus tooling adds lots of metrics about internal state. In this case we care only about kubevirt-related metrics, so we filter out everything else with something like grep -E '^kubevirt_'Putting all together: curl -s -k -L https://$(kubectl get pod -o json -n KubeVirt virt-handler-6ng6j | jq -r '. status. podIP'):8443/metrics | grep -E '^kubevirt_'Let’s see how a healthy output looks like: kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0 } 1kubevirt_vm_memory_resident_bytes{domain= default_vm-test-01 } 4. 1168896e+07kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= rx } 90kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= tx } 0kubevirt_vm_vcpu_seconds{domain= default_vm-test-01 ,id= 0 ,state= 1 } 5173Please remember that some metrics can be correctly omitted for some VMs. In general, we should always see metrics about version (pseudo metric), memory, network, and CPU. But there are known cases on which not having storage metrics is expected and correct: for example this case, since we are using a diskless VM. Coming next: The KubeVirt team is still working to enhance and refine the metrics support. There are two main active topics. First, discussion is ongoing about adding more metrics,depending on the needs of the community or the needs of the ecosystem. Furthermore, there is work in progress to increase the robustnessand the reliability of the monitoring. We also have plans to improve the integration with kubernetes. Stay tuned for more updates! " }, { "id": 113, "url": "/2019/changelog-v0.15.0.html", @@ -1168,7 +1168,7 @@
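Besides the range query shown above, the same HTTP API serves instant PromQL queries, which is handy for rates on the _total counters. The sketch below reports the per-second transmit rate over the last five minutes; as with the earlier example, replace $CLUSTER_IP by hand and adjust the label filter to taste:

    # Per-second transmit rate for each VM, averaged over the last 5 minutes
    curl -G 'http://$CLUSTER_IP:9090/api/v1/query' \
      --data-urlencode 'query=rate(kubevirt_vm_network_traffic_bytes_total{type="tx"}[5m])' | json_pp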

    "title": "Ignition Support", "author" : "karmab", "tags" : "ignition, coreos, rhcos", - "body": "Introduction: Ignition is a new provisioning utility designed specifically for CoreOS/RhCOS. At the most basic level, it is a tool for manipulating a node during early boot. This includes: Partitioning disks. Formatting partitions. Writing files (regular files, systemd units, networkd units). Configuring users and their associated ssh public keys. Recently, we added support for it in KubeVirt so ignition data can now be embedded in a vm specification, through a dedicated annotation. Ignition support is still needed in the guest operating system. Enabling Ignition Support: Ignition Support has to be enabled through a feature gate. This is achieved by creating (or editing ) the kubevirt-config ConfigMap in the kubevirt namespace. A minimal config map would look like this: apiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. io: data: feature-gates: ExperimentalIgnitionSupportMake sure to delete kubevirt related pods afterward for the configuration to be taken into account: kubectl delete pod --all -n kubevirtWorkThrough: We assume that you already have a Kubernetes or OpenShift cluster running with KubeVirt installed. Step 1: Create The following VM spec in the file myvm1. yml: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: myvm1spec: running: true template: metadata: labels: kubevirt. io/size: small annotations: kubevirt. io/ignitiondata: | { ignition : { config : {}, version : 2. 2. 0 }, networkd : {}, passwd : { users : [ { name : core , sshAuthorizedKeys : [ ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/AvM9VbO2yiIb9AillBp/kTr8jqIErRU1LFKqhwPTm4AtVIjFSaOuM4AlspfCUIz9IHBrDcZmbcYKai3lC3JtQic7M/a1OWUjWE1ML8CEvNsGPGu5yNVUQoWC0lmW5rzX9c6HvH8AcmfMmdyQ7SgcAnk0zir9jw8ed2TRAzHn3vXFd7+saZLihFJhXG4zB8vh7gJHjLfjIa3JHptWzW9AtqF9QsoBY/iu58Rf/hRnrfWscyN3x9pGCSEqdLSDv7HFuH2EabnvNFFQZr4J1FYzH/fKVY3Ppt3rf64UWCztDu7L44fPwwkI7nAzdmQVTaMoD3Ej8i7/OSFZsC2V5IBT kboumedh@bumblefoot ] }, ] } } spec: domain: devices: disks: - name: containerdisk disk: bus: virtio interfaces: - name: default bridge: {} resources: requests: memory: 64M networks: - name: default pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demoNote We simply inject the ignition data as a string in vm/spec/domain/spec/metadata/annotations, using kubevirt. io/ignitiondata as an annotation Step 2: Create the VM: $ kubectl apply -f myvm1. ymlvirtualmachine myvm1 createdAt this point, when VM boots, ignition data will be injected. How does it work under the hood?: We currently leverage Pass-through of arbitrary qemu commands although there is some discussion around using a metadata server instead Summary: Ignition Support brings the ability to run CoreOS/RHCOS distros on KubeVirt and to customize them at boot time. " + "body": "Introduction: Ignition is a new provisioning utility designed specifically for CoreOS/RhCOS. At the most basic level, it is a tool for manipulating a node during early boot. This includes: Partitioning disks. Formatting partitions. Writing files (regular files, systemd units, networkd units). Configuring users and their associated ssh public keys. Recently, we added support for it in KubeVirt so ignition data can now be embedded in a vm specification, through a dedicated annotation. Ignition support is still needed in the guest operating system. 
Enabling Ignition Support: Ignition Support has to be enabled through a feature gate. This is achieved by creating (or editing ) the kubevirt-config ConfigMap in the kubevirt namespace. A minimal config map would look like this: apiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. io: data: feature-gates: ExperimentalIgnitionSupportMake sure to delete kubevirt related pods afterward for the configuration to be taken into account: kubectl delete pod --all -n kubevirtWorkThrough: We assume that you already have a Kubernetes or OpenShift cluster running with KubeVirt installed. Step 1: Create The following VM spec in the file myvm1. yml: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: myvm1spec: runStrategy: Always template: metadata: labels: kubevirt. io/size: small annotations: kubevirt. io/ignitiondata: | { ignition : { config : {}, version : 2. 2. 0 }, networkd : {}, passwd : { users : [ { name : core , sshAuthorizedKeys : [ ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/AvM9VbO2yiIb9AillBp/kTr8jqIErRU1LFKqhwPTm4AtVIjFSaOuM4AlspfCUIz9IHBrDcZmbcYKai3lC3JtQic7M/a1OWUjWE1ML8CEvNsGPGu5yNVUQoWC0lmW5rzX9c6HvH8AcmfMmdyQ7SgcAnk0zir9jw8ed2TRAzHn3vXFd7+saZLihFJhXG4zB8vh7gJHjLfjIa3JHptWzW9AtqF9QsoBY/iu58Rf/hRnrfWscyN3x9pGCSEqdLSDv7HFuH2EabnvNFFQZr4J1FYzH/fKVY3Ppt3rf64UWCztDu7L44fPwwkI7nAzdmQVTaMoD3Ej8i7/OSFZsC2V5IBT kboumedh@bumblefoot ] }, ] } } spec: domain: devices: disks: - name: containerdisk disk: bus: virtio interfaces: - name: default bridge: {} resources: requests: memory: 64M networks: - name: default pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demoNote We simply inject the ignition data as a string in vm/spec/domain/spec/metadata/annotations, using kubevirt. io/ignitiondata as an annotation Step 2: Create the VM: $ kubectl apply -f myvm1. ymlvirtualmachine myvm1 createdAt this point, when VM boots, ignition data will be injected. How does it work under the hood?: We currently leverage Pass-through of arbitrary qemu commands although there is some discussion around using a metadata server instead Summary: Ignition Support brings the ability to run CoreOS/RHCOS distros on KubeVirt and to customize them at boot time. " }, { "id": 123, "url": "/2018/new-volume-types.html", @@ -1189,7 +1189,7 @@
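If the kubevirt-config ConfigMap shown earlier in this post already exists in your cluster, a patch along the lines below could be used instead of recreating it; note that this replaces the whole feature-gates value, so include any gates that are already enabled alongside ExperimentalIgnitionSupport:

    # Merge the Ignition feature gate into the existing kubevirt-config ConfigMap
    kubectl patch configmap kubevirt-config -n kubevirt --type merge \
      -p '{"data":{"feature-gates":"ExperimentalIgnitionSupport"}}'

As with the example in the post, the KubeVirt pods still need to be restarted afterwards for the change to be picked up.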

    "title": "Cdi Datavolumes", "author" : "tripledes", "tags" : "cdi, datavolumes", - "body": "CDI DataVolumesContainerized Data Importer (or CDI for short), is a data import service for Kubernetes designed with KubeVirt in mind. Thanks to CDI, we can now enjoy the addition of DataVolumes, which greatly improve the workflow of managing KubeVirt and its storage. What it does: DataVolumes are an abstraction of the Kubernetes resource, PVC (Persistent Volume Claim) and it also leverages other CDI features to ease the process of importing data into a Kubernetes cluster. DataVolumes can be defined by themselves or embedded within a VirtualMachine resource definition, the first method can be used to orchestrate events based on the DataVolume status phases while the second eases the process of providing storage for a VM. How does it work?: In this blog post, I’d like to focus on the second method, embedding the information within a VirtualMachine definition, which might seem like the most immediate benefit of this feature. Let’s get started! Environment description: OpenShift For testing DataVolumes, I’ve spawned a new OpenShift cluster, using dynamic provisioning for storage running OpenShift Cloud Storage (GlusterFS), so the Persistent Volumes (PVs for short) are created on-demand. Other than that, it’s a regular OpenShift cluster, running with a single master (also used for infrastructure components) and two compute nodes. CDI We also need CDI, of course, CDI can be deployed either together with KubeVirt or independently, the instructions can be found in the project’s GitHub repo. KubeVirt Last but not least, we’ll need KubeVirt to run the VMs that will make use of the DataVolumes. Enabling DataVolumes feature: As of this writing, DataVolumes have to be enabled through a feature gate, for KubeVirt, this is achieved by creating the kubevirt-config ConfigMap on the namespace where KubeVirt has been deployed, by default kube-system. Let’s create the ConfigMap with the following definition: ---apiVersion: v1data: feature-gates: DataVolumeskind: ConfigMapmetadata: name: kubevirt-config namespace: kube-system$ oc create -f kubevirt-config-cm. ymlAlternatively, the following one-liner can also be used to achieve the same result: $ oc create configmap kubevirt-config --from-literal feature-gates=DataVolumes -n kube-systemIf the ConfigMap was already present on the system, just use oc edit to add the DataVolumes feature gate under the data field like the YAML above. If everything went as expected, we should see the following log lines on the virt-controller pods: level=info timestamp=2018-10-09T08:16:53. 602400Z pos=application. go:173 component=virt-controller msg= DataVolume integration enabled NOTE: It’s worth noting the values in the ConfigMap are not dynamic, in the sense that virt-controller and virt-api will need to be restarted, scaling their deployments down and back up again, just remember to scale it up to the same number of replicas they previously had. Creating a VirtualMachine embedding a DataVolume: Now that the cluster is ready to use the feature, let’s have a look at our VirtualMachine definition, which includes a DataVolume. apiVersion: kubevirt. io/v1alpha2kind: VirtualMachinemetadata: labels: kubevirt. io/vm: testvm1 name: testvm1spec: dataVolumeTemplates: - metadata: name: centos7-dv spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi source: http: url: https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. 
qcow2 running: true template: metadata: labels: kubevirt. io/vm: testvm1 spec: domain: cpu: cores: 1 devices: disks: - volumeName: test-datavolume name: disk0 disk: bus: virtio - name: cloudinitdisk volumeName: cloudinitvolume cdrom: bus: virtio resources: requests: memory: 8Gi volumes: - dataVolume: name: centos7-dv name: test-datavolume - cloudInitNoCloud: userData: | #cloud-config hostname: testvm1 users: - name: kubevirt gecos: KubeVirt Project sudo: ALL=(ALL) NOPASSWD:ALL passwd: $6$JXbc3063IJir. e5h$ypMlYScNMlUtvQ8Il1ldZi/mat7wXTiRioGx6TQmJjTVMandKqr. jJfe99. QckyfH/JJ. OdvLb5/OrCa8ftLr. shell: /bin/bash home: /home/kubevirt lock_passwd: false name: cloudinitvolumeThe new addition to a regular VirtualMachine definition is the dataVolumeTemplates block, which will trigger the import of the CentOS-7 cloud image defined on the url field, storing it on a PV, the resulting DataVolume will be named centos7-dv, being referenced on the volumes section, it will serve as the boot disk (disk0) for our VirtualMachine. Going ahead and applying the above manifest to our cluster results in the following set of events: The DataVolume is created, triggering the creation of a PVC and therefore, using the dynamic provisioning configured on the cluster, a PV is provisioned to satisfy the needs of the PVC. An importer pod is started, this pod is the one actually downloading the image defined in the url field and storing it on the provisioned PV. Once the image has been downloaded and stored, the DataVolume status changes to Succeeded, from that point the virt launcher controller will go ahead and schedule the VirtualMachine. Taking a look to the resources created after applying the VirtualMachine manifest, we can see the following: $ oc get podsNAME READY STATUS RESTARTS AGEimporter-centos7-dv-t9zx2 0/1 Completed 0 11mvirt-launcher-testvm1-cpt8n 1/1 Running 0 8mLet’s look at the importer pod logs to understand what it did: $ oc logs importer-centos7-dv-t9zx2I1009 12:37:45. 384032 1 importer. go:32] Starting importerI1009 12:37:45. 393461 1 importer. go:37] begin import processI1009 12:37:45. 393519 1 dataStream. go:235] copying https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 to /data/disk. img . . . I1009 12:37:45. 393569 1 dataStream. go:112] IMPORTER_ACCESS_KEY_ID and/or IMPORTER_SECRET_KEY are emptyI1009 12:37:45. 393606 1 dataStream. go:298] create the initial Reader based on the endpoint's https schemeI1009 12:37:45. 393665 1 dataStream. go:208] Attempting to get object https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 via http clientI1009 12:37:45. 762330 1 dataStream. go:314] constructReaders: checking compression and archive formats: /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2I1009 12:37:45. 841564 1 dataStream. go:323] found header of type qcow2 I1009 12:37:45. 841618 1 dataStream. go:338] constructReaders: no headers found for file /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 I1009 12:37:45. 841635 1 dataStream. go:340] done processing /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 headersI1009 12:37:45. 841650 1 dataStream. go:138] NewDataStream: endpoint https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 's computed byte size: 8589934592I1009 12:37:45. 841698 1 dataStream. go:566] Validating qcow2 fileI1009 12:37:46. 848736 1 dataStream. go:572] Doing streaming qcow2 to raw conversionI1009 12:40:07. 546308 1 importer. 
go:43] import completeSo, following the events we see, it fetched the image from the defined url, validated its format and converted it to raw for being used by qemu. $ oc describe dv centos7-dvName: centos7-dvNamespace: test-dvLabels: kubevirt. io/created-by=1916da5f-cbc0-11e8-b467-c81f666533c3Annotations: kubevirt. io/owned-by=virt-controllerAPI Version: cdi. kubevirt. io/v1alpha1Kind: DataVolumeMetadata: Creation Timestamp: 2018-10-09T12:37:34Z Generation: 1 Owner References: API Version: kubevirt. io/v1alpha2 Block Owner Deletion: true Controller: true Kind: VirtualMachine Name: testvm1 UID: 1916da5f-cbc0-11e8-b467-c81f666533c3 Resource Version: 2474310 Self Link: /apis/cdi. kubevirt. io/v1alpha1/namespaces/test-dv/datavolumes/centos7-dv UID: 19186b29-cbc0-11e8-b467-c81f666533c3Spec: Pvc: Access Modes: ReadWriteOnce Resources: Requests: Storage: 10Gi Source: Http: URL: https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2Status: Phase: SucceededEvents: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Synced 29s (x13 over 14m) datavolume-controller DataVolume synced successfully Normal Synced 18s datavolume-controller DataVolume synced successfullyThe DataVolume description matches what was defined under dataVolumeTemplates. Now, as we know it uses a PV/PVC underneath, let’s have a look: $ oc describe pvc centos7-dvName: centos7-dvNamespace: test-dvStorageClass: glusterfs-storageStatus: BoundVolume: pvc-191d27c6-cbc0-11e8-b467-c81f666533c3Labels: app=containerized-data-importer cdi-controller=centos7-dvAnnotations: cdi. kubevirt. io/storage. import. endpoint=https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 cdi. kubevirt. io/storage. import. importPodName=importer-centos7-dv-t9zx2 cdi. kubevirt. io/storage. pod. phase=Succeeded pv. kubernetes. io/bind-completed=yes pv. kubernetes. io/bound-by-controller=yes volume. beta. kubernetes. io/storage-provisioner=kubernetes. io/glusterfsFinalizers: [kubernetes. io/pvc-protection]Capacity: 10GiAccess Modes: RWOEvents: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ProvisioningSucceeded 18m persistentvolume-controller Successfully provisioned volume pvc-191d27c6-cbc0-11e8-b467-c81f666533c3 using kubernetes. io/glusterfsIt’s important to pay attention to the annotations, these are monitored/set by CDI. CDI triggers an import when it detects the cdi. kubevirt. io/storage. import. endpoint, assigns a pod as the import task owner and updates the pod phase annotation. At this point, everything is in place, the DataVolume has its underlying components, the image has been imported so now the VirtualMachine can start the VirtualMachineInstance based on its definition and using the CentOS7 image as boot disk, as users we can connect to its console as usual, for instance running the following command: $ virtctl console testvm1Cleaning it up: Once we’re happy with the results, it’s time to clean up all these tests. The task is easy: $ oc delete vm testvm1Once the VM (and its associated VMI) are gone, all the underlying storage resources are removed, there is no trace of the PVC, PV or DataVolume. $ oc get dv centos7-dv$ oc get pvc centos7-dv$ oc get pv pvc-191d27c6-cbc0-11e8-b467-c81f666533c3All three commands returned No resources found. " + "body": "CDI DataVolumesContainerized Data Importer (or CDI for short), is a data import service for Kubernetes designed with KubeVirt in mind. 
Thanks to CDI, we can now enjoy the addition of DataVolumes, which greatly improve the workflow of managing KubeVirt and its storage. What it does: DataVolumes are an abstraction of the Kubernetes resource, PVC (Persistent Volume Claim) and it also leverages other CDI features to ease the process of importing data into a Kubernetes cluster. DataVolumes can be defined by themselves or embedded within a VirtualMachine resource definition, the first method can be used to orchestrate events based on the DataVolume status phases while the second eases the process of providing storage for a VM. How does it work?: In this blog post, I’d like to focus on the second method, embedding the information within a VirtualMachine definition, which might seem like the most immediate benefit of this feature. Let’s get started! Environment description: OpenShift For testing DataVolumes, I’ve spawned a new OpenShift cluster, using dynamic provisioning for storage running OpenShift Cloud Storage (GlusterFS), so the Persistent Volumes (PVs for short) are created on-demand. Other than that, it’s a regular OpenShift cluster, running with a single master (also used for infrastructure components) and two compute nodes. CDI We also need CDI, of course, CDI can be deployed either together with KubeVirt or independently, the instructions can be found in the project’s GitHub repo. KubeVirt Last but not least, we’ll need KubeVirt to run the VMs that will make use of the DataVolumes. Enabling DataVolumes feature: As of this writing, DataVolumes have to be enabled through a feature gate, for KubeVirt, this is achieved by creating the kubevirt-config ConfigMap on the namespace where KubeVirt has been deployed, by default kube-system. Let’s create the ConfigMap with the following definition: ---apiVersion: v1data: feature-gates: DataVolumeskind: ConfigMapmetadata: name: kubevirt-config namespace: kube-system$ oc create -f kubevirt-config-cm. ymlAlternatively, the following one-liner can also be used to achieve the same result: $ oc create configmap kubevirt-config --from-literal feature-gates=DataVolumes -n kube-systemIf the ConfigMap was already present on the system, just use oc edit to add the DataVolumes feature gate under the data field like the YAML above. If everything went as expected, we should see the following log lines on the virt-controller pods: level=info timestamp=2018-10-09T08:16:53. 602400Z pos=application. go:173 component=virt-controller msg= DataVolume integration enabled NOTE: It’s worth noting the values in the ConfigMap are not dynamic, in the sense that virt-controller and virt-api will need to be restarted, scaling their deployments down and back up again, just remember to scale it up to the same number of replicas they previously had. Creating a VirtualMachine embedding a DataVolume: Now that the cluster is ready to use the feature, let’s have a look at our VirtualMachine definition, which includes a DataVolume. apiVersion: kubevirt. io/v1alpha2kind: VirtualMachinemetadata: labels: kubevirt. io/vm: testvm1 name: testvm1spec: dataVolumeTemplates: - metadata: name: centos7-dv spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi source: http: url: https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 runStrategy: Always template: metadata: labels: kubevirt. 
io/vm: testvm1 spec: domain: cpu: cores: 1 devices: disks: - volumeName: test-datavolume name: disk0 disk: bus: virtio - name: cloudinitdisk volumeName: cloudinitvolume cdrom: bus: virtio resources: requests: memory: 8Gi volumes: - dataVolume: name: centos7-dv name: test-datavolume - cloudInitNoCloud: userData: | #cloud-config hostname: testvm1 users: - name: kubevirt gecos: KubeVirt Project sudo: ALL=(ALL) NOPASSWD:ALL passwd: $6$JXbc3063IJir. e5h$ypMlYScNMlUtvQ8Il1ldZi/mat7wXTiRioGx6TQmJjTVMandKqr. jJfe99. QckyfH/JJ. OdvLb5/OrCa8ftLr. shell: /bin/bash home: /home/kubevirt lock_passwd: false name: cloudinitvolumeThe new addition to a regular VirtualMachine definition is the dataVolumeTemplates block, which will trigger the import of the CentOS-7 cloud image defined on the url field, storing it on a PV, the resulting DataVolume will be named centos7-dv, being referenced on the volumes section, it will serve as the boot disk (disk0) for our VirtualMachine. Going ahead and applying the above manifest to our cluster results in the following set of events: The DataVolume is created, triggering the creation of a PVC and therefore, using the dynamic provisioning configured on the cluster, a PV is provisioned to satisfy the needs of the PVC. An importer pod is started, this pod is the one actually downloading the image defined in the url field and storing it on the provisioned PV. Once the image has been downloaded and stored, the DataVolume status changes to Succeeded, from that point the virt launcher controller will go ahead and schedule the VirtualMachine. Taking a look to the resources created after applying the VirtualMachine manifest, we can see the following: $ oc get podsNAME READY STATUS RESTARTS AGEimporter-centos7-dv-t9zx2 0/1 Completed 0 11mvirt-launcher-testvm1-cpt8n 1/1 Running 0 8mLet’s look at the importer pod logs to understand what it did: $ oc logs importer-centos7-dv-t9zx2I1009 12:37:45. 384032 1 importer. go:32] Starting importerI1009 12:37:45. 393461 1 importer. go:37] begin import processI1009 12:37:45. 393519 1 dataStream. go:235] copying https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 to /data/disk. img . . . I1009 12:37:45. 393569 1 dataStream. go:112] IMPORTER_ACCESS_KEY_ID and/or IMPORTER_SECRET_KEY are emptyI1009 12:37:45. 393606 1 dataStream. go:298] create the initial Reader based on the endpoint's https schemeI1009 12:37:45. 393665 1 dataStream. go:208] Attempting to get object https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 via http clientI1009 12:37:45. 762330 1 dataStream. go:314] constructReaders: checking compression and archive formats: /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2I1009 12:37:45. 841564 1 dataStream. go:323] found header of type qcow2 I1009 12:37:45. 841618 1 dataStream. go:338] constructReaders: no headers found for file /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 I1009 12:37:45. 841635 1 dataStream. go:340] done processing /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 headersI1009 12:37:45. 841650 1 dataStream. go:138] NewDataStream: endpoint https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 's computed byte size: 8589934592I1009 12:37:45. 841698 1 dataStream. go:566] Validating qcow2 fileI1009 12:37:46. 848736 1 dataStream. go:572] Doing streaming qcow2 to raw conversionI1009 12:40:07. 546308 1 importer. 
go:43] import completeSo, following the events we see, it fetched the image from the defined url, validated its format and converted it to raw for being used by qemu. $ oc describe dv centos7-dvName: centos7-dvNamespace: test-dvLabels: kubevirt. io/created-by=1916da5f-cbc0-11e8-b467-c81f666533c3Annotations: kubevirt. io/owned-by=virt-controllerAPI Version: cdi. kubevirt. io/v1alpha1Kind: DataVolumeMetadata: Creation Timestamp: 2018-10-09T12:37:34Z Generation: 1 Owner References: API Version: kubevirt. io/v1alpha2 Block Owner Deletion: true Controller: true Kind: VirtualMachine Name: testvm1 UID: 1916da5f-cbc0-11e8-b467-c81f666533c3 Resource Version: 2474310 Self Link: /apis/cdi. kubevirt. io/v1alpha1/namespaces/test-dv/datavolumes/centos7-dv UID: 19186b29-cbc0-11e8-b467-c81f666533c3Spec: Pvc: Access Modes: ReadWriteOnce Resources: Requests: Storage: 10Gi Source: Http: URL: https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2Status: Phase: SucceededEvents: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Synced 29s (x13 over 14m) datavolume-controller DataVolume synced successfully Normal Synced 18s datavolume-controller DataVolume synced successfullyThe DataVolume description matches what was defined under dataVolumeTemplates. Now, as we know it uses a PV/PVC underneath, let’s have a look: $ oc describe pvc centos7-dvName: centos7-dvNamespace: test-dvStorageClass: glusterfs-storageStatus: BoundVolume: pvc-191d27c6-cbc0-11e8-b467-c81f666533c3Labels: app=containerized-data-importer cdi-controller=centos7-dvAnnotations: cdi. kubevirt. io/storage. import. endpoint=https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 cdi. kubevirt. io/storage. import. importPodName=importer-centos7-dv-t9zx2 cdi. kubevirt. io/storage. pod. phase=Succeeded pv. kubernetes. io/bind-completed=yes pv. kubernetes. io/bound-by-controller=yes volume. beta. kubernetes. io/storage-provisioner=kubernetes. io/glusterfsFinalizers: [kubernetes. io/pvc-protection]Capacity: 10GiAccess Modes: RWOEvents: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ProvisioningSucceeded 18m persistentvolume-controller Successfully provisioned volume pvc-191d27c6-cbc0-11e8-b467-c81f666533c3 using kubernetes. io/glusterfsIt’s important to pay attention to the annotations, these are monitored/set by CDI. CDI triggers an import when it detects the cdi. kubevirt. io/storage. import. endpoint, assigns a pod as the import task owner and updates the pod phase annotation. At this point, everything is in place, the DataVolume has its underlying components, the image has been imported so now the VirtualMachine can start the VirtualMachineInstance based on its definition and using the CentOS7 image as boot disk, as users we can connect to its console as usual, for instance running the following command: $ virtctl console testvm1Cleaning it up: Once we’re happy with the results, it’s time to clean up all these tests. The task is easy: $ oc delete vm testvm1Once the VM (and its associated VMI) are gone, all the underlying storage resources are removed, there is no trace of the PVC, PV or DataVolume. $ oc get dv centos7-dv$ oc get pvc centos7-dv$ oc get pv pvc-191d27c6-cbc0-11e8-b467-c81f666533c3All three commands returned No resources found. " }, { "id": 126, "url": "/2018/containerized-data-importer.html", @@ -1336,14 +1336,14 @@
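To follow an import like the one above while it is still running, the DataVolume phase and the importer pod logs are the two things worth watching; the resource names and the test-dv namespace below are the ones used in this post and will differ in other clusters:

    # Check the DataVolume phase (it should eventually report Succeeded)
    oc get dv centos7-dv -n test-dv -o jsonpath='{.status.phase}'
    # Stream the importer pod logs while the image is downloaded and converted
    oc logs -f importer-centos7-dv-t9zx2 -n test-dv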

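Once the import from the previous post has finished and the VM has been started, a quick way to confirm that the VirtualMachineInstance is actually running before attaching to its console is a sketch like the following (the test-dv namespace is the one shown in the describe output above; adjust it to your cluster):

    # Confirm the VirtualMachineInstance created for testvm1 is up
    oc get vmi testvm1 -n test-dv
    # Then attach to the serial console as shown in the post
    virtctl console testvm1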
    "title": "Kubevirt Objects", "author" : "jcpowermac", "tags" : "custom resources, kubevirt objects, objects, VirtualMachine", - "body": "The KubeVirt project provides extensions to Kubernetes via custom resources. These resources are a collection a API objects that defines a virtual machine within Kubernetes. I think it’s important to point out the two great resources that I used tocompile information for this post: user-guide api-referenceWith that let’s take a look at the objects that are available. KubeVirt top-level objectsBelow is a list of the top level API objects and descriptions that KubeVirt provides. VirtualMachine (vm[s]) - represents a virtual machine in the runtime environment of Kubernetes. OfflineVirtualMachine (ovm[s]) - handles the virtual machines that are not running or are in a stopped state. VirtualMachinePreset (vmpreset[s]) - is an extension to general VirtualMachine configuration behaving much like PodPresets from Kubernetes. When a VirtualMachine is created, any applicable VirtualMachinePresets will be applied to the existing spec for the VirtualMachine. This allows for re-use of common settings that should apply to multiple VirtualMachines. VirtualMachineReplicaSet (vmrs[s]) - tries to ensures that a specified number of VirtualMachine replicas are running at any time. DomainSpec is listed as a top-level object but is only used within all of the objects above. Currently the DomainSpec is a subset of what is configurable via libvirt domain XML. VirtualMachine: VirtualMachine is mortal object just like aPod within Kubernetes. It only runs once and cannot be resurrected. This might seem problematic especiallyto an administrator coming from a traditional virtualization background. Fortunatelylater we will discuss OfflineVirtualMachines which will address this. First let’s use kubectl to retrieve a list of VirtualMachine objects. $ kubectl get vms -n nodejs-exNAME AGEmongodb 5dnodejs 5dWe can also use kubectl describe $ kubectl describe vms -n testName: testvmNamespace: testLabels: guest=testvm kubevirt. io/nodeName=kn2. virtomation. com kubevirt. io/size=small. . . output. . . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 59m virtualmachine-controller Created virtual machine pod virt-launcher-testvm-8h927 Normal SuccessfulHandOver 59m virtualmachine-controller Pod owner ship transfered to the node virt-launcher-testvm-8h927 Normal Created 59m (x2 over 59m) virt-handler, kn2. virtomation. com VM defined. Normal Started 59m virt-handler, kn2. virtomation. com VM started. And just in case if you want to return the yaml definition of a VirtualMachine object here is an example. $ kubectl -o yaml get vms mongodb -n nodejs-exapiVersion: kubevirt. io/v1alpha1kind: VirtualMachine. . . output. . . The first object we will annotate is VirtualMachine. The important sections . spec for VirtualMachineSpec and . spec. domain for DomainSpec will be annotated only in this section then referred to in the other object sections. apiVersion: kubevirt. io/v1alpha1kind: VirtualMachinemetadata: annotations: {} labels: {} name: string namespace: stringspec: {}Node Placement: Kubernetes has the ability to schedule a pod to specific nodes based on affinity and anti-affinity rules. Node affinity is also possible with KubeVirt. To constrain a virtual machine to run on a node define a matching expressions using node labels. 
affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - preference: matchExpressions: - key: string operator: string values: - string weight: 0 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: string operator: string values: - stringA virtual machine can also more easily be constrained by using nodeSelector which is defined by node’s label and value. Here is an example nodeSelector: kubernetes. io/hostname: kn1. virtomation. comClocks and Timers: Configures the virtualize hardware clock provided by QEMU. domain: clock: timezone: string utc: offsetSeconds: 0The timer defines the type and policy attribute that determines what action is take when QEMU misses a deadline for injecting a tick to the guest. domain: clock: timer: hpet: present: true tickPolicy: string hyperv: present: true kvm: present: true pit: present: true tickPolicy: string rtc: present: true tickPolicy: string track: stringCPU and Memory: The number of CPU cores a virtual machine will be assigned. . spec. domain. cpu. cores will not be used for scheduling use . spec. domain. resources. requests. cpu instead. cpu: cores: 1There are two supported resource limits and requests: cpu and memory. A . spec. domain. resources. requests. memory should be defined to determine the allocation of memory provided to the virtual machine. These values will be used to in scheduling decisions. resources: limits: {} requests: {}Watchdog Devices: . spec. domain. watchdog automatically triggers an action via Libvirt and QEMU when the virtual machine operating system hangs or crashes. watchdog: i6300esb: action: string name: stringFeatures: . spec. domain. featuresare hypervisor cpu or machine features that can be enabled. After reviewing both Linux and Microsoft QEMU virtual machines managed byLibvirtboth acpi andapicshould be enabled. The hyperv features should be enabled only for Windows-based virtual machines. For additional information regarding features please visit the virtual hardware configuration in the kubevirt user guide. features: acpi: enabled: true apic: enabled: true endOfInterrupt: true hyperv: relaxed: enabled: true reset: enabled: true runtime: enabled: true spinlocks: enabled: true spinlocks: 0 synic: enabled: true synictimer: enabled: true vapic: enabled: true vendorid: enabled: true vendorid: string vpindex: enabled: trueQEMU Machine Type: . spec. domain. machine. type is the emulated machine architecture provided by QEMU. machine: type: stringHere is an example how to retrieve the supported QEMU machine types. $ qemu-system-x86_64 --machine help Supported machines are: . . . output. . . pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-2. 10) pc-i440fx-2. 10 Standard PC (i440FX + PIIX, 1996) (default) . . . output. . . q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-2. 10) pc-q35-2. 10 Standard PC (Q35 + ICH9, 2009)Disks and Volumes: . spec. domain. devices. disks configures a QEMU type of disk to the virtual machine and assigns a specific volume and its type to that disk via the volumeName. devices: disks: - cdrom: bus: string readonly: true tray: string disk: bus: string readonly: true floppy: readonly: true tray: string lun: bus: string readonly: true name: string volumeName: stringcloudInitNoCloudinjects scripts and configuration into a virtual machine operating system. There are three different parameters that can be used to provide thecloud-init coniguration: secretRef, userData or userDataBase64. See the user-guide for examples of how to use . 
spec. volumes. cloudInitNoCloud. volumes: - cloudInitNoCloud: secretRef: name: string userData: string userDataBase64: stringAn emptyDisk volume creates an extra qcow2 disk that is created with the virtual machine. It will be removed if the VirtualMachine object is deleted. emptyDisk: capacity: stringEphemeral volume creates a temporary local copy on write image storage that will be discarded when the VirtualMachine is removed. ephemeral: persistentVolumeClaim: claimName: string readOnly: truename: stringpersistentVolumeClaim volume persists after the VirtualMachine is deleted. persistentVolumeClaim: claimName: string readOnly: trueregistryDisk volume type uses a virtual machine disk that is stored in a container image registry. registryDisk: image: string imagePullSecret: stringVirtual Machine Status: Once the VirtualMachine object has been created the VirtualMachineStatus will be available. VirtualMachineStatus can be used in automation tools such as Ansible to confirm running state, determine where a VirtualMachine is running via nodeName or the ipAddress of the virtual machine operating system. kubectl -o yaml get vm mongodb -n nodejs-ex# . . . output. . . status: interfaces: - ipAddress: 10. 244. 2. 7 nodeName: kn2. virtomation. com phase: RunningExample using --template to retrieve the . status. phase of the VirtualMachine. kubectl get vm mongodb --template {{. status. phase}} -n nodejs-exRunningExamples: https://kubevirt. io/user-guide/virtual_machines/virtual_machine_instances/#virtualmachineinstance-apiOfflineVirtualMachine: An OfflineVirtualMachine is an immortal object within KubeVirt. The VirtualMachinedescribed within the spec will be recreated with a start power operation, host issueor simply a accidental deletion of the VirtualMachine object. For a traditional virtual administrator this object might be appropriate formost use-cases. Just like VirtualMachine we can retrieve the OfflineVirtualMachine objects. $ kubectl get ovms -n nodejs-exNAME AGEmongodb 5dnodejs 5dAnd display the object in yaml. $ kubectl -o yaml get ovms mongodb -n nodejs-exapiVersion: kubevirt. io/v1alpha1kind: OfflineVirtualMachinemetadata:. . . output. . . We continue by annotating OfflineVirtualMachine object. apiVersion: kubevirt. io/v1alpha1kind: OfflineVirtualMachinemetadata: annotations: {} labels: {} name: string namespace: stringspec:What is Running in OfflineVirtualMachine?: . spec. running controls whether the associated VirtualMachine object is created. In other words this changes the power status of the virtual machine. running: trueThis will create a VirtualMachine object which will instantiate and power on a virtual machine. kubectl patch offlinevirtualmachine mongodb --type merge -p '{ spec :{ running :true }}' -n nodejs-exThis will delete the VirtualMachine object which will power off the virtual machine. kubectl patch offlinevirtualmachine mongodb --type merge -p '{ spec :{ running :false }}' -n nodejs-exAnd if you would rather not have to remember the kubectl patch command abovethe KubeVirt team has provided a cli tool virtctl that can start and stopa guest. . /virtctl start mongodb -n nodejs-ex. /virtctl stop mongodb -n nodejs-exOffline Virtual Machine Status: Once the OfflineVirtualMachine object has been created the OfflineVirtualMachineStatus will be available. Like VirtualMachineStatus OfflineVirtualMachineStatus can be used for automation tools such as Ansible. kubectl -o yaml get ovms mongodb -n nodejs-ex# . . . output. . . 
status: created: true ready: trueExample using --template to retrieve the . status. conditions[0]. type of OfflineVirtualMachine. kubectl get ovm mongodb --template {{. status. ready}} -n nodejs-extrueVirtualMachineReplicaSet: VirtualMachineReplicaSet is great when you want to run multiple identical virtual machines. Just like the other top-level objects we can retrieve VirtualMachineReplicaSet. $ kubectl get vmrs -n nodejs-exNAME AGEreplica 1mWith the replicas parameter set to 2 the command below displays the two VirtualMachine objects that were created. $ kubectl get vms -n nodejs-exNAME AGEreplicanmgjl 7mreplicarjhdz 7mPause rollout: The . spec. paused parameter if true pauses the deployment of the VirtualMachineReplicaSet. paused: trueReplica quantity: The . spec. replicas number of VirtualMachine objects that should be created. replicas: 0The selector must be defined and match labels defined in the template. It is used by the controller to keep track of managed virtual machines. selector: matchExpressions: - key: string operator: string values: - string matchLabels: {}Virtual Machine Template Spec: The VMTemplateSpec is the definition of a VirtualMachine objects that will be created. In the VirtualMachine section the . spec VirtualMachineSpec describes the available parameters for that object. template: metadata: annotations: {} labels: {} name: string namespace: string spec: {}Replica Status: Like the other objects we already have discussed VMReplicaSetStatus is an important object to use for automation. status: readyReplicas: 0 replicas: 0Example using --template to retrieve the . status. readyReplicas and . status. replicas of VirtualMachineReplicaSet. $ kubectl get vmrs replica --template {{. status. readyReplicas}} -n nodejs-ex2$ kubectl get vmrs replica --template {{. status. replicas}} -n nodejs-ex2Examples: https://kubevirt. io/user-guide/virtual_machines/replicaset/#exampleVirtualMachinePreset: This is used to define a DomainSpec that can be used for multiple virtual machines. To configure a DomainSpec for multiple VirtualMachine objects the selector defines which VirtualMachine the VirtualMachinePreset should be applied to. $ kubectl get vmpreset -n nodejs-exNAME AGEm1. small 17sDomain Spec: See the VirtualMachine section above for annotated details of the DomainSpec object. spec: domain: {}Preset Selector: The selector is optional but if not defined will be applied to all VirtualMachine objects; which is probably not the intended purpose so I recommend always including a selector. selector: matchExpressions: - key: string operator: string values: - string matchLabels: {}Examples: https://kubevirt. io/user-guide/virtual_machines/presets/#examplesWe provided an annotated view into the KubeVirt objects - VirtualMachine, OfflineVirtualMachine, VirtualMachineReplicaSet and VirtualMachinePreset. Hopefully this will help a user of KubeVirt to understand the options and parameters that are currently available when creating a virtual machine on Kubernetes. " + "body": "The KubeVirt project provides extensions to Kubernetes via custom resources. These resources are a collection a API objects that defines a virtual machine within Kubernetes. I think it’s important to point out the two great resources that I used tocompile information for this post: user-guide api-referenceWith that let’s take a look at the objects that are available. KubeVirt top-level objectsBelow is a list of the top level API objects and descriptions that KubeVirt provides. 
VirtualMachine (vm[s]) - represents a virtual machine in the runtime environment of Kubernetes. OfflineVirtualMachine (ovm[s]) - handles the virtual machines that are not running or are in a stopped state. VirtualMachinePreset (vmpreset[s]) - is an extension to general VirtualMachine configuration behaving much like PodPresets from Kubernetes. When a VirtualMachine is created, any applicable VirtualMachinePresets will be applied to the existing spec for the VirtualMachine. This allows for re-use of common settings that should apply to multiple VirtualMachines. VirtualMachineReplicaSet (vmrs[s]) - tries to ensures that a specified number of VirtualMachine replicas are running at any time. DomainSpec is listed as a top-level object but is only used within all of the objects above. Currently the DomainSpec is a subset of what is configurable via libvirt domain XML. VirtualMachine: VirtualMachine is mortal object just like aPod within Kubernetes. It only runs once and cannot be resurrected. This might seem problematic especiallyto an administrator coming from a traditional virtualization background. Fortunatelylater we will discuss OfflineVirtualMachines which will address this. First let’s use kubectl to retrieve a list of VirtualMachine objects. $ kubectl get vms -n nodejs-exNAME AGEmongodb 5dnodejs 5dWe can also use kubectl describe $ kubectl describe vms -n testName: testvmNamespace: testLabels: guest=testvm kubevirt. io/nodeName=kn2. virtomation. com kubevirt. io/size=small. . . output. . . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 59m virtualmachine-controller Created virtual machine pod virt-launcher-testvm-8h927 Normal SuccessfulHandOver 59m virtualmachine-controller Pod owner ship transfered to the node virt-launcher-testvm-8h927 Normal Created 59m (x2 over 59m) virt-handler, kn2. virtomation. com VM defined. Normal Started 59m virt-handler, kn2. virtomation. com VM started. And just in case if you want to return the yaml definition of a VirtualMachine object here is an example. $ kubectl -o yaml get vms mongodb -n nodejs-exapiVersion: kubevirt. io/v1alpha1kind: VirtualMachine. . . output. . . The first object we will annotate is VirtualMachine. The important sections . spec for VirtualMachineSpec and . spec. domain for DomainSpec will be annotated only in this section then referred to in the other object sections. apiVersion: kubevirt. io/v1alpha1kind: VirtualMachinemetadata: annotations: {} labels: {} name: string namespace: stringspec: {}Node Placement: Kubernetes has the ability to schedule a pod to specific nodes based on affinity and anti-affinity rules. Node affinity is also possible with KubeVirt. To constrain a virtual machine to run on a node define a matching expressions using node labels. affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - preference: matchExpressions: - key: string operator: string values: - string weight: 0 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: string operator: string values: - stringA virtual machine can also more easily be constrained by using nodeSelector which is defined by node’s label and value. Here is an example nodeSelector: kubernetes. io/hostname: kn1. virtomation. comClocks and Timers: Configures the virtualize hardware clock provided by QEMU. 
domain: clock: timezone: string utc: offsetSeconds: 0The timer defines the type and policy attribute that determines what action is take when QEMU misses a deadline for injecting a tick to the guest. domain: clock: timer: hpet: present: true tickPolicy: string hyperv: present: true kvm: present: true pit: present: true tickPolicy: string rtc: present: true tickPolicy: string track: stringCPU and Memory: The number of CPU cores a virtual machine will be assigned. . spec. domain. cpu. cores will not be used for scheduling use . spec. domain. resources. requests. cpu instead. cpu: cores: 1There are two supported resource limits and requests: cpu and memory. A . spec. domain. resources. requests. memory should be defined to determine the allocation of memory provided to the virtual machine. These values will be used to in scheduling decisions. resources: limits: {} requests: {}Watchdog Devices: . spec. domain. watchdog automatically triggers an action via Libvirt and QEMU when the virtual machine operating system hangs or crashes. watchdog: i6300esb: action: string name: stringFeatures: . spec. domain. featuresare hypervisor cpu or machine features that can be enabled. After reviewing both Linux and Microsoft QEMU virtual machines managed byLibvirtboth acpi andapicshould be enabled. The hyperv features should be enabled only for Windows-based virtual machines. For additional information regarding features please visit the virtual hardware configuration in the kubevirt user guide. features: acpi: enabled: true apic: enabled: true endOfInterrupt: true hyperv: relaxed: enabled: true reset: enabled: true runtime: enabled: true spinlocks: enabled: true spinlocks: 0 synic: enabled: true synictimer: enabled: true vapic: enabled: true vendorid: enabled: true vendorid: string vpindex: enabled: trueQEMU Machine Type: . spec. domain. machine. type is the emulated machine architecture provided by QEMU. machine: type: stringHere is an example how to retrieve the supported QEMU machine types. $ qemu-system-x86_64 --machine help Supported machines are: . . . output. . . pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-2. 10) pc-i440fx-2. 10 Standard PC (i440FX + PIIX, 1996) (default) . . . output. . . q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-2. 10) pc-q35-2. 10 Standard PC (Q35 + ICH9, 2009)Disks and Volumes: . spec. domain. devices. disks configures a QEMU type of disk to the virtual machine and assigns a specific volume and its type to that disk via the volumeName. devices: disks: - cdrom: bus: string readonly: true tray: string disk: bus: string readonly: true floppy: readonly: true tray: string lun: bus: string readonly: true name: string volumeName: stringcloudInitNoCloudinjects scripts and configuration into a virtual machine operating system. There are three different parameters that can be used to provide thecloud-init coniguration: secretRef, userData or userDataBase64. See the user-guide for examples of how to use . spec. volumes. cloudInitNoCloud. volumes: - cloudInitNoCloud: secretRef: name: string userData: string userDataBase64: stringAn emptyDisk volume creates an extra qcow2 disk that is created with the virtual machine. It will be removed if the VirtualMachine object is deleted. emptyDisk: capacity: stringEphemeral volume creates a temporary local copy on write image storage that will be discarded when the VirtualMachine is removed. 
ephemeral: persistentVolumeClaim: claimName: string readOnly: truename: stringpersistentVolumeClaim volume persists after the VirtualMachine is deleted. persistentVolumeClaim: claimName: string readOnly: trueregistryDisk volume type uses a virtual machine disk that is stored in a container image registry. registryDisk: image: string imagePullSecret: stringVirtual Machine Status: Once the VirtualMachine object has been created the VirtualMachineStatus will be available. VirtualMachineStatus can be used in automation tools such as Ansible to confirm running state, determine where a VirtualMachine is running via nodeName or the ipAddress of the virtual machine operating system. kubectl -o yaml get vm mongodb -n nodejs-ex# . . . output. . . status: interfaces: - ipAddress: 10. 244. 2. 7 nodeName: kn2. virtomation. com phase: RunningExample using --template to retrieve the . status. phase of the VirtualMachine. kubectl get vm mongodb --template {{. status. phase}} -n nodejs-exRunningExamples: https://kubevirt. io/user-guide/virtual_machines/virtual_machine_instances/#virtualmachineinstance-apiOfflineVirtualMachine: An OfflineVirtualMachine is an immortal object within KubeVirt. The VirtualMachinedescribed within the spec will be recreated with a start power operation, host issueor simply a accidental deletion of the VirtualMachine object. For a traditional virtual administrator this object might be appropriate formost use-cases. Just like VirtualMachine we can retrieve the OfflineVirtualMachine objects. $ kubectl get ovms -n nodejs-exNAME AGEmongodb 5dnodejs 5dAnd display the object in yaml. $ kubectl -o yaml get ovms mongodb -n nodejs-exapiVersion: kubevirt. io/v1alpha1kind: OfflineVirtualMachinemetadata:. . . output. . . We continue by annotating OfflineVirtualMachine object. apiVersion: kubevirt. io/v1alpha1kind: OfflineVirtualMachinemetadata: annotations: {} labels: {} name: string namespace: stringspec:What is Running in OfflineVirtualMachine?: . spec. runStrategy controls whether and when the associated VirtualMachineInstance object is created. In other words this controls the power status of the virtual machine. runStrategy: AlwaysThis will create a VirtualMachineInstance object which will instantiate and power on a virtual machine. kubectl patch offlinevirtualmachine mongodb --type merge -p '{ spec :{ runStrategy : Always }}' -n nodejs-exThis will delete the VirtualMachineInstance object which will power off the virtual machine. kubectl patch offlinevirtualmachine mongodb --type merge -p '{ spec :{ runStrategy : Halted }}' -n nodejs-exAnd if you would rather not have to remember the kubectl patch command abovethe KubeVirt team has provided a cli tool virtctl that can start and stopa guest. . /virtctl start mongodb -n nodejs-ex. /virtctl stop mongodb -n nodejs-exOffline Virtual Machine Status: Once the OfflineVirtualMachine object has been created the OfflineVirtualMachineStatus will be available. Like VirtualMachineStatus OfflineVirtualMachineStatus can be used for automation tools such as Ansible. kubectl -o yaml get ovms mongodb -n nodejs-ex# . . . output. . . status: created: true ready: trueExample using --template to retrieve the . status. conditions[0]. type of OfflineVirtualMachine. kubectl get ovm mongodb --template {{. status. ready}} -n nodejs-extrueVirtualMachineReplicaSet: VirtualMachineReplicaSet is great when you want to run multiple identical virtual machines. Just like the other top-level objects we can retrieve VirtualMachineReplicaSet. 
$ kubectl get vmrs -n nodejs-exNAME AGEreplica 1mWith the replicas parameter set to 2 the command below displays the two VirtualMachine objects that were created. $ kubectl get vms -n nodejs-exNAME AGEreplicanmgjl 7mreplicarjhdz 7mPause rollout: The . spec. paused parameter if true pauses the deployment of the VirtualMachineReplicaSet. paused: trueReplica quantity: The . spec. replicas number of VirtualMachine objects that should be created. replicas: 0The selector must be defined and match labels defined in the template. It is used by the controller to keep track of managed virtual machines. selector: matchExpressions: - key: string operator: string values: - string matchLabels: {}Virtual Machine Template Spec: The VMTemplateSpec is the definition of a VirtualMachine objects that will be created. In the VirtualMachine section the . spec VirtualMachineSpec describes the available parameters for that object. template: metadata: annotations: {} labels: {} name: string namespace: string spec: {}Replica Status: Like the other objects we already have discussed VMReplicaSetStatus is an important object to use for automation. status: readyReplicas: 0 replicas: 0Example using --template to retrieve the . status. readyReplicas and . status. replicas of VirtualMachineReplicaSet. $ kubectl get vmrs replica --template {{. status. readyReplicas}} -n nodejs-ex2$ kubectl get vmrs replica --template {{. status. replicas}} -n nodejs-ex2Examples: https://kubevirt. io/user-guide/virtual_machines/replicaset/#exampleVirtualMachinePreset: This is used to define a DomainSpec that can be used for multiple virtual machines. To configure a DomainSpec for multiple VirtualMachine objects the selector defines which VirtualMachine the VirtualMachinePreset should be applied to. $ kubectl get vmpreset -n nodejs-exNAME AGEm1. small 17sDomain Spec: See the VirtualMachine section above for annotated details of the DomainSpec object. spec: domain: {}Preset Selector: The selector is optional but if not defined will be applied to all VirtualMachine objects; which is probably not the intended purpose so I recommend always including a selector. selector: matchExpressions: - key: string operator: string values: - string matchLabels: {}Examples: https://kubevirt. io/user-guide/virtual_machines/presets/#examplesWe provided an annotated view into the KubeVirt objects - VirtualMachine, OfflineVirtualMachine, VirtualMachineReplicaSet and VirtualMachinePreset. Hopefully this will help a user of KubeVirt to understand the options and parameters that are currently available when creating a virtual machine on Kubernetes. " }, { "id": 147, "url": "/2018/Deploying-VMs-on-Kubernetes-GlusterFS-KubeVirt.html", "title": "Deploying Vms On Kubernetes Glusterfs Kubevirt", "author" : "rwsu", "tags" : "glusterfs, heketi, virtual machine, weavenet", - "body": "Kubernetes is traditionally used to deploy and manage containerized applications. Did you know Kubernetes can also be used to deploy and manage virtual machines? This guide will walk you through installing a Kubernetes environment backed by GlusterFS for storage and the KubeVirt add-on to enable deployment and management of VMs. Contents: Prerequisites Known Issues Installing Kubernetes Installing GlusterFS and Heketi using gk-deploy Installing KubeVirt Deploying Virtual MachinesPrerequisites: You should have access to at least three baremetal servers. One server will be the master Kubernetes node and other two servers will be the worker nodes. 
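(Referring back to the VirtualMachinePreset selector discussed in the Kubevirt Objects entry above, here is a minimal, hypothetical preset sketch. The kubevirt.io/size: small label and the m1.small name are taken from that entry; the memory request value is illustrative only.)

# Hypothetical VirtualMachinePreset sketch; the selector and domain layout
# follow the annotated fields above, the memory value is illustrative.
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachinePreset
metadata:
  name: m1.small
spec:
  selector:
    matchLabels:
      kubevirt.io/size: small
  domain:
    resources:
      requests:
        memory: 64M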
Each server should have a block device attached for GlusterFS, this is in addition to the ones used by the OS. You may use virtual machines in lieu of baremetal servers. Performance may suffer and you will need to ensure your hardware supports nested virtualization and that the relevant kernel modules are loaded in the OS. For reference, I used the following components and versions: baremetal servers with CentOS version 7. 4 as the base OS latest version of Kubernetes (at the time v1. 10. 1) Weave Net as the Container Network Interface (CNI), v2. 3. 0 gluster-kubernetes master commit 2a2a68ce5739524802a38f3871c545e4f57fa20a KubeVirt v0. 4. 1. Known Issues: You may need to set SELinux to permissive mode prior to running “kubeadm init” if you see failures attributed to etcd in /var/log/audit. log. Prior to installing GlusterFS, you may need to disable firewalld until this issue is resolved: https://github. com/gluster/gluster-kubernetes/issues/471 kubevirt-ansible install may fail in storage-glusterfs role: https://github. com/kubevirt/kubevirt-ansible/issues/219Installing Kubernetes: Create the Kubernetes cluster by using kubeadm. Detailed instructions can be found at https://kubernetes. io/docs/setup/independent/install-kubeadm/. Use Weave Net as the CNI. Other CNIs may work, but I have only tested Weave Net. If you are using only 2 servers as workers, then you will need to allow scheduling of pods on the master node because GlusterFS requires at least three nodes. To schedule pods on the master node, see “Master Isolation” in the kubeadm guide or execute this command: kubectl taint nodes --all node-role. kubernetes. io/master-Move onto the next step when your master and worker nodes are Ready. [root@master ~]# kubectl get nodesNAME STATUS ROLES AGE VERSIONmaster. somewhere. com Ready master 6d v1. 10. 1worker1. somewhere. com Ready <none> 6d v1. 10. 1worker2. somewhere. com Ready <none> 6d v1. 10. 1And all of the pods in the kube-system namespace are Running. [root@master ~]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGEetcd-master. somewhere. com 1/1 Running 0 6dkube-apiserver-master. somewhere. com 1/1 Running 0 6dkube-controller-manager-master. somewhere. com 1/1 Running 0 6dkube-dns-86f4d74b45-glv4k 3/3 Running 0 6dkube-proxy-b6ksg 1/1 Running 0 6dkube-proxy-jjxs5 1/1 Running 0 6dkube-proxy-kw77k 1/1 Running 0 6dkube-scheduler-master. somewhere. com 1/1 Running 0 6dweave-net-ldlh7 2/2 Running 0 6dweave-net-pmhlx 2/2 Running 1 6dweave-net-s4dp6 2/2 Running 0 6dInstalling GlusterFS and Heketi using gluster-kubernetes: The next step is to deploy GlusterFS and Heketi onto Kubernetes. GlusterFS provides the storage system on which the virtual machine images are stored. Heketi provides the REST API that Kubernetes uses to provision GlusterFS volumes. The gk-deploy tool is used to deploy both of these components as pods in the Kubernetes cluster. There is a detailed setup guide for gk-deploy. Note each node must have a raw block device that is reserved for use by heketi and they must not contain any data or be pre-formatted. You can reset your block device to a useable state by running: wipefs -a <path to device>To aid you, below are the commands you will need to run if you are following the setup guide. 
On all nodes: # Open ports for GlusterFS communicationssudo iptables -I INPUT 1 -p tcp --dport 2222 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 24007 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 24008 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 49152:49251 -j ACCEPT# Load kernel modulessudo modprobe dm_snapshotsudo modprobe dm_thin_poolsudo modprobe dm_mirror# Install glusterfs-fuse and git packagessudo yum install -y glusterfs-fuse gitOn the master node: # checkout gluster-kubernetes repogit clone https://github. com/gluster/gluster-kubernetescd gluster-kubernetes/deployBefore running the gk-deploy script, we need to first create a topology. json file that maps the nodes present in the GlusterFS cluster and the block devices attached to each node. The block devices should be raw and unformatted. Below is a sample topology. json file for a 3 node cluster all operating in the same zone. The gluster-kubernetes/deploy directory also contains a sample topology. json file. # topology. json{ clusters : [ { nodes : [ { node : { hostnames : { manage : [ master. somewhere. com ], storage : [ 192. 168. 10. 100 ] }, zone : 1 }, devices : [ /dev/vdb ] }, { node : { hostnames : { manage : [ worker1. somewhere. com ], storage : [ 192. 168. 10. 101 ] }, zone : 1 }, devices : [ /dev/vdb ] }, { node : { hostnames : { manage : [ worker2. somewhere. com ], storage : [ 192. 168. 10. 102 ] }, zone : 1 }, devices : [ /dev/vdb ] } ] } ]}Under “hostnames”, the node’s hostname is listed under “manage” and its IP address is listed under “storage”. Multiple block devices can be listed under “devices”. If you are using VMs, the second block device attached to the VM will usually be /dev/vdb. For multi-path, the device path will usually be /dev/mapper/mpatha. If you are using a second disk drive, the device path will usually be /dev/sdb. Once you have your topology. json file and saved it in gluster-kubernetes/deploy, we can execute gk-deploy to create the GlusterFS and Heketi pods. You will need to specify an admin-key which will be used in the next step and will be discovered during the KubeVirt installation. # from gluster-kubernetes/deploy. /gk-deploy -g -v -n kube-system --admin-key my-admin-keyAdd the end of the installation, you will see: heketi is now running and accessible via http://10. 32. 0. 4:8080 . To runadministrative commands you can install 'heketi-cli' and use it as follows: # heketi-cli -s http://10. 32. 0. 4:8080 --user admin --secret '<ADMIN_KEY>' cluster listYou can find it at https://github. com/heketi/heketi/releases . Alternatively,use it from within the heketi pod: # /usr/bin/kubectl -n kube-system exec -i heketi-b96c7c978-dcwlw -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster listFor dynamic provisioning, create a StorageClass similar to this:\Take note of the URL for Heketi which will be used next step. If successful, 4 additional pods will be shown as Running in the kube-system namespace. [root@master deploy]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGE. . . snip. . . glusterfs-h4nwf 1/1 Running 0 6dglusterfs-kfvjk 1/1 Running 0 6dglusterfs-tjm2f 1/1 Running 0 6dheketi-b96c7c978-dcwlw 1/1 Running 0 6d. . . snip. . . Installing KubeVirt and setting up storage: The final component to install and which will enable us to deploy VMs on Kubernetes is KubeVirt. 
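As a readability aid for the GlusterFS step above, here is the three-node sample topology.json written out as ordinary JSON; the hostnames, addresses, and device paths are the guide's placeholders, and the quoting is restored on the assumption that the original file is standard JSON.

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": { "manage": ["master.somewhere.com"], "storage": ["192.168.10.100"] },
            "zone": 1
          },
          "devices": ["/dev/vdb"]
        },
        {
          "node": {
            "hostnames": { "manage": ["worker1.somewhere.com"], "storage": ["192.168.10.101"] },
            "zone": 1
          },
          "devices": ["/dev/vdb"]
        },
        {
          "node": {
            "hostnames": { "manage": ["worker2.somewhere.com"], "storage": ["192.168.10.102"] },
            "zone": 1
          },
          "devices": ["/dev/vdb"]
        }
      ]
    }
  ]
}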
We will use kubevirt-ansible to deploy KubeVirt which will also help us configure a Secret and a StorageClass that will allow us to provision Persistent Volume Claims (PVCs) on GlusterFS. Let’s first clone the kubevirt-ansible repo. git clone https://github. com/kubevirt/kubevirt-ansiblecd kubevirt-ansibleEdit the inventory file in the kubevirt-ansible checkout. Modify the section that starts with “#BEGIN CUSTOM SETTINGS”. As an example using the servers from above: # BEGIN CUSTOM SETTINGS[masters]# Your master FQDNmaster. somewhere. com[etcd]# Your etcd FQDNmaster. somewhere. com[nodes]# Your nodes FQDN'sworker1. somewhere. comworker2. somewhere. com[nfs]# Your nfs server FQDN[glusterfs]# Your glusterfs nodes FQDN# Each node should have the glusterfs_devices variable, which# points to the block device that will be used by gluster. master. somewhere. comworker1. somewhere. comworker1. somewhere. com## If you run openshift deployment# You can add your master as schedulable node with option openshift_schedulable=true# Add at least one node with lable to run on it router and docker containers# openshift_node_labels= {'region': 'infra','zone': 'default'} # END CUSTOM SETTINGSNow let’s run the kubevirt. yml playbook: ansible-playbook -i inventory playbooks/kubevirt. yml -e cluster=k8s -e storage_role=storage-glusterfs -e namespace=kube-system -e glusterfs_namespace=kube-system -e glusterfs_name= -e heketi_url=http://10. 32. 0. 4:8080 -vIf successful, we should see 7 additional pods as Running in the kube-system namespace. [root@master kubevirt-ansible]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGEvirt-api-785fd6b4c7-rdknl 1/1 Running 0 6dvirt-api-785fd6b4c7-rfbqv 1/1 Running 0 6dvirt-controller-844469fd89-c5vrc 1/1 Running 0 6dvirt-controller-844469fd89-vtjct 0/1 Running 0 6dvirt-handler-78wsb 1/1 Running 0 6dvirt-handler-csqbl 1/1 Running 0 6dvirt-handler-hnlqn 1/1 Running 0 6dDeploying Virtual Machines: To deploy a VM, we must first grab a VM image in raw format, place the image into a PVC, define the VM in a yaml file, source the VM definition into Kubernetes, and then start the VM. The containerized data importer (CDI) is usually used to import VM images into Kubernetes, but there are some patches and additional testing to be done before the CDI can work smoothly with GlusterFS. For now, we will be placing the image into the PVC using a Pod that curls the image from the local filesystem using httpd. On master or on a node where kubectl is configured correctly install and start httpd. sudo yum install -y httpdsudo systemctl start httpdDownload the cirros cloud image and convert it into raw format. curl http://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img -o /var/www/html/cirros-0. 4. 0-x86_64-disk. imgsudo yum install -y qemu-imgqemu-img convert /var/www/html/cirros-0. 4. 0-x86_64-disk. img /var/www/html/cirros-0. 4. 0-x86_64-disk. rawCreate the PVC to store the cirros image. cat <<EOF | kubectl create -f -apiVersion: v1kind: PersistentVolumeClaimmetadata: name: gluster-pvc-cirros annotations: volume. beta. kubernetes. io/storage-class: kubevirtspec: accessModes: - ReadWriteOnce resources: requests: storage: 5GiEOFCheck the PVC was created and has “Bound” status. [root@master ~]# kubectl get pvcNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEgluster-pvc-cirros Bound pvc-843bd508-4dbf-11e8-9e4e-149ecfc53021 5Gi RWO kubevirt 2mCreate a Pod to curl the cirros image into the PVC. Note: You will need to substitute with actual hostname or IP address. 
cat <<EOF | kubectl create -f -apiVersion: v1kind: Podmetadata: name: image-importer-cirrosspec: restartPolicy: OnFailure containers: - name: image-importer-cirros image: kubevirtci/disk-importer env: - name: CURL_OPTS value: -L - name: INSTALL_TO value: /storage/disk. img - name: URL value: http://<hostname>/cirros-0. 4. 0-x86_64-disk. raw volumeMounts: - name: storage mountPath: /storage volumes: - name: storage persistentVolumeClaim: claimName: gluster-pvc-cirrosEOFCheck and wait for the image-importer-cirros Pod to complete. [root@master ~]# kubectl get podsNAME READY STATUS RESTARTS AGEimage-importer-cirros 0/1 Completed 0 28sCreate a Virtual Machine definition for your VM and source it into Kubernetes. Note the PVC containing the cirros image must be listed as the first disk under spec. domain. devices. disks. cat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1alpha2kind: VirtualMachinemetadata: creationTimestamp: null labels: kubevirt. io/ovm: cirros name: cirrosspec: running: false template: metadata: creationTimestamp: null labels: kubevirt. io/ovm: cirros spec: domain: devices: disks: - disk: bus: virtio name: pvcdisk volumeName: cirros-pvc - disk: bus: virtio name: cloudinitdisk volumeName: cloudinitvolume machine: type: resources: requests: memory: 64M terminationGracePeriodSeconds: 0 volumes: - cloudInitNoCloud: userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK name: cloudinitvolume - name: cirros-pvc persistentVolumeClaim: claimName: gluster-pvc-cirrosstatus: {}Finally start the VM. export VERSION=v0. 4. 1curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64chmod +x virtctl. /virtctl start cirrosWait for the VM pod to be in “Running” status. [root@master ~]# kubectl get podsNAME READY STATUS RESTARTS AGEimage-importer-cirros 0/1 Completed 0 28svirt-launcher-cirros-krvv2 0/1 Running 0 13sOnce it is running, we can then connect to its console. . /virtctl console cirrosPress enter if a login prompt doesn’t appear. " + "body": "Kubernetes is traditionally used to deploy and manage containerized applications. Did you know Kubernetes can also be used to deploy and manage virtual machines? This guide will walk you through installing a Kubernetes environment backed by GlusterFS for storage and the KubeVirt add-on to enable deployment and management of VMs. Contents: Prerequisites Known Issues Installing Kubernetes Installing GlusterFS and Heketi using gk-deploy Installing KubeVirt Deploying Virtual MachinesPrerequisites: You should have access to at least three baremetal servers. One server will be the master Kubernetes node and other two servers will be the worker nodes. Each server should have a block device attached for GlusterFS, this is in addition to the ones used by the OS. You may use virtual machines in lieu of baremetal servers. Performance may suffer and you will need to ensure your hardware supports nested virtualization and that the relevant kernel modules are loaded in the OS. For reference, I used the following components and versions: baremetal servers with CentOS version 7. 4 as the base OS latest version of Kubernetes (at the time v1. 10. 1) Weave Net as the Container Network Interface (CNI), v2. 3. 0 gluster-kubernetes master commit 2a2a68ce5739524802a38f3871c545e4f57fa20a KubeVirt v0. 4. 1. Known Issues: You may need to set SELinux to permissive mode prior to running “kubeadm init” if you see failures attributed to etcd in /var/log/audit. log. 
Prior to installing GlusterFS, you may need to disable firewalld until this issue is resolved: https://github. com/gluster/gluster-kubernetes/issues/471 kubevirt-ansible install may fail in storage-glusterfs role: https://github. com/kubevirt/kubevirt-ansible/issues/219Installing Kubernetes: Create the Kubernetes cluster by using kubeadm. Detailed instructions can be found at https://kubernetes. io/docs/setup/independent/install-kubeadm/. Use Weave Net as the CNI. Other CNIs may work, but I have only tested Weave Net. If you are using only 2 servers as workers, then you will need to allow scheduling of pods on the master node because GlusterFS requires at least three nodes. To schedule pods on the master node, see “Master Isolation” in the kubeadm guide or execute this command: kubectl taint nodes --all node-role. kubernetes. io/master-Move onto the next step when your master and worker nodes are Ready. [root@master ~]# kubectl get nodesNAME STATUS ROLES AGE VERSIONmaster. somewhere. com Ready master 6d v1. 10. 1worker1. somewhere. com Ready <none> 6d v1. 10. 1worker2. somewhere. com Ready <none> 6d v1. 10. 1And all of the pods in the kube-system namespace are Running. [root@master ~]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGEetcd-master. somewhere. com 1/1 Running 0 6dkube-apiserver-master. somewhere. com 1/1 Running 0 6dkube-controller-manager-master. somewhere. com 1/1 Running 0 6dkube-dns-86f4d74b45-glv4k 3/3 Running 0 6dkube-proxy-b6ksg 1/1 Running 0 6dkube-proxy-jjxs5 1/1 Running 0 6dkube-proxy-kw77k 1/1 Running 0 6dkube-scheduler-master. somewhere. com 1/1 Running 0 6dweave-net-ldlh7 2/2 Running 0 6dweave-net-pmhlx 2/2 Running 1 6dweave-net-s4dp6 2/2 Running 0 6dInstalling GlusterFS and Heketi using gluster-kubernetes: The next step is to deploy GlusterFS and Heketi onto Kubernetes. GlusterFS provides the storage system on which the virtual machine images are stored. Heketi provides the REST API that Kubernetes uses to provision GlusterFS volumes. The gk-deploy tool is used to deploy both of these components as pods in the Kubernetes cluster. There is a detailed setup guide for gk-deploy. Note each node must have a raw block device that is reserved for use by heketi and they must not contain any data or be pre-formatted. You can reset your block device to a useable state by running: wipefs -a <path to device>To aid you, below are the commands you will need to run if you are following the setup guide. On all nodes: # Open ports for GlusterFS communicationssudo iptables -I INPUT 1 -p tcp --dport 2222 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 24007 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 24008 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 49152:49251 -j ACCEPT# Load kernel modulessudo modprobe dm_snapshotsudo modprobe dm_thin_poolsudo modprobe dm_mirror# Install glusterfs-fuse and git packagessudo yum install -y glusterfs-fuse gitOn the master node: # checkout gluster-kubernetes repogit clone https://github. com/gluster/gluster-kubernetescd gluster-kubernetes/deployBefore running the gk-deploy script, we need to first create a topology. json file that maps the nodes present in the GlusterFS cluster and the block devices attached to each node. The block devices should be raw and unformatted. Below is a sample topology. json file for a 3 node cluster all operating in the same zone. The gluster-kubernetes/deploy directory also contains a sample topology. json file. # topology. json{ clusters : [ { nodes : [ { node : { hostnames : { manage : [ master. 
somewhere. com ], storage : [ 192. 168. 10. 100 ] }, zone : 1 }, devices : [ /dev/vdb ] }, { node : { hostnames : { manage : [ worker1. somewhere. com ], storage : [ 192. 168. 10. 101 ] }, zone : 1 }, devices : [ /dev/vdb ] }, { node : { hostnames : { manage : [ worker2. somewhere. com ], storage : [ 192. 168. 10. 102 ] }, zone : 1 }, devices : [ /dev/vdb ] } ] } ]}Under “hostnames”, the node’s hostname is listed under “manage” and its IP address is listed under “storage”. Multiple block devices can be listed under “devices”. If you are using VMs, the second block device attached to the VM will usually be /dev/vdb. For multi-path, the device path will usually be /dev/mapper/mpatha. If you are using a second disk drive, the device path will usually be /dev/sdb. Once you have your topology. json file and saved it in gluster-kubernetes/deploy, we can execute gk-deploy to create the GlusterFS and Heketi pods. You will need to specify an admin-key which will be used in the next step and will be discovered during the KubeVirt installation. # from gluster-kubernetes/deploy. /gk-deploy -g -v -n kube-system --admin-key my-admin-keyAdd the end of the installation, you will see: heketi is now running and accessible via http://10. 32. 0. 4:8080 . To runadministrative commands you can install 'heketi-cli' and use it as follows: # heketi-cli -s http://10. 32. 0. 4:8080 --user admin --secret '<ADMIN_KEY>' cluster listYou can find it at https://github. com/heketi/heketi/releases . Alternatively,use it from within the heketi pod: # /usr/bin/kubectl -n kube-system exec -i heketi-b96c7c978-dcwlw -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster listFor dynamic provisioning, create a StorageClass similar to this:\Take note of the URL for Heketi which will be used next step. If successful, 4 additional pods will be shown as Running in the kube-system namespace. [root@master deploy]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGE. . . snip. . . glusterfs-h4nwf 1/1 Running 0 6dglusterfs-kfvjk 1/1 Running 0 6dglusterfs-tjm2f 1/1 Running 0 6dheketi-b96c7c978-dcwlw 1/1 Running 0 6d. . . snip. . . Installing KubeVirt and setting up storage: The final component to install and which will enable us to deploy VMs on Kubernetes is KubeVirt. We will use kubevirt-ansible to deploy KubeVirt which will also help us configure a Secret and a StorageClass that will allow us to provision Persistent Volume Claims (PVCs) on GlusterFS. Let’s first clone the kubevirt-ansible repo. git clone https://github. com/kubevirt/kubevirt-ansiblecd kubevirt-ansibleEdit the inventory file in the kubevirt-ansible checkout. Modify the section that starts with “#BEGIN CUSTOM SETTINGS”. As an example using the servers from above: # BEGIN CUSTOM SETTINGS[masters]# Your master FQDNmaster. somewhere. com[etcd]# Your etcd FQDNmaster. somewhere. com[nodes]# Your nodes FQDN'sworker1. somewhere. comworker2. somewhere. com[nfs]# Your nfs server FQDN[glusterfs]# Your glusterfs nodes FQDN# Each node should have the glusterfs_devices variable, which# points to the block device that will be used by gluster. master. somewhere. comworker1. somewhere. comworker1. somewhere. com## If you run openshift deployment# You can add your master as schedulable node with option openshift_schedulable=true# Add at least one node with lable to run on it router and docker containers# openshift_node_labels= {'region': 'infra','zone': 'default'} # END CUSTOM SETTINGSNow let’s run the kubevirt. 
yml playbook: ansible-playbook -i inventory playbooks/kubevirt. yml -e cluster=k8s -e storage_role=storage-glusterfs -e namespace=kube-system -e glusterfs_namespace=kube-system -e glusterfs_name= -e heketi_url=http://10. 32. 0. 4:8080 -vIf successful, we should see 7 additional pods as Running in the kube-system namespace. [root@master kubevirt-ansible]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGEvirt-api-785fd6b4c7-rdknl 1/1 Running 0 6dvirt-api-785fd6b4c7-rfbqv 1/1 Running 0 6dvirt-controller-844469fd89-c5vrc 1/1 Running 0 6dvirt-controller-844469fd89-vtjct 0/1 Running 0 6dvirt-handler-78wsb 1/1 Running 0 6dvirt-handler-csqbl 1/1 Running 0 6dvirt-handler-hnlqn 1/1 Running 0 6dDeploying Virtual Machines: To deploy a VM, we must first grab a VM image in raw format, place the image into a PVC, define the VM in a yaml file, source the VM definition into Kubernetes, and then start the VM. The containerized data importer (CDI) is usually used to import VM images into Kubernetes, but there are some patches and additional testing to be done before the CDI can work smoothly with GlusterFS. For now, we will be placing the image into the PVC using a Pod that curls the image from the local filesystem using httpd. On master or on a node where kubectl is configured correctly install and start httpd. sudo yum install -y httpdsudo systemctl start httpdDownload the cirros cloud image and convert it into raw format. curl http://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img -o /var/www/html/cirros-0. 4. 0-x86_64-disk. imgsudo yum install -y qemu-imgqemu-img convert /var/www/html/cirros-0. 4. 0-x86_64-disk. img /var/www/html/cirros-0. 4. 0-x86_64-disk. rawCreate the PVC to store the cirros image. cat <<EOF | kubectl create -f -apiVersion: v1kind: PersistentVolumeClaimmetadata: name: gluster-pvc-cirros annotations: volume. beta. kubernetes. io/storage-class: kubevirtspec: accessModes: - ReadWriteOnce resources: requests: storage: 5GiEOFCheck the PVC was created and has “Bound” status. [root@master ~]# kubectl get pvcNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEgluster-pvc-cirros Bound pvc-843bd508-4dbf-11e8-9e4e-149ecfc53021 5Gi RWO kubevirt 2mCreate a Pod to curl the cirros image into the PVC. Note: You will need to substitute with actual hostname or IP address. cat <<EOF | kubectl create -f -apiVersion: v1kind: Podmetadata: name: image-importer-cirrosspec: restartPolicy: OnFailure containers: - name: image-importer-cirros image: kubevirtci/disk-importer env: - name: CURL_OPTS value: -L - name: INSTALL_TO value: /storage/disk. img - name: URL value: http://<hostname>/cirros-0. 4. 0-x86_64-disk. raw volumeMounts: - name: storage mountPath: /storage volumes: - name: storage persistentVolumeClaim: claimName: gluster-pvc-cirrosEOFCheck and wait for the image-importer-cirros Pod to complete. [root@master ~]# kubectl get podsNAME READY STATUS RESTARTS AGEimage-importer-cirros 0/1 Completed 0 28sCreate a Virtual Machine definition for your VM and source it into Kubernetes. Note the PVC containing the cirros image must be listed as the first disk under spec. domain. devices. disks. cat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1alpha2kind: VirtualMachinemetadata: creationTimestamp: null labels: kubevirt. io/ovm: cirros name: cirrosspec: runStrategy: Halted template: metadata: creationTimestamp: null labels: kubevirt. 
io/ovm: cirros spec: domain: devices: disks: - disk: bus: virtio name: pvcdisk volumeName: cirros-pvc - disk: bus: virtio name: cloudinitdisk volumeName: cloudinitvolume machine: type: resources: requests: memory: 64M terminationGracePeriodSeconds: 0 volumes: - cloudInitNoCloud: userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK name: cloudinitvolume - name: cirros-pvc persistentVolumeClaim: claimName: gluster-pvc-cirrosstatus: {}Finally start the VM. export VERSION=v0. 4. 1curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64chmod +x virtctl. /virtctl start cirrosWait for the VM pod to be in “Running” status. [root@master ~]# kubectl get podsNAME READY STATUS RESTARTS AGEimage-importer-cirros 0/1 Completed 0 28svirt-launcher-cirros-krvv2 0/1 Running 0 13sOnce it is running, we can then connect to its console. . /virtctl console cirrosPress enter if a login prompt doesn’t appear. " }, { "id": 148, "url": "/2018/changelog-v0.5.0.html", @@ -1700,7 +1700,7 @@
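The VM above is created with runStrategy: Halted and then powered on with virtctl. The same power-state change can also be made with kubectl patch; below is a sketch with the JSON quoting written out explicitly, assuming the cirros VM name used in this guide.

# Power on: the run strategy change causes a VirtualMachineInstance to be created
kubectl patch virtualmachine cirros --type merge -p '{"spec":{"runStrategy":"Always"}}'

# Power off: the VirtualMachineInstance is deleted again
kubectl patch virtualmachine cirros --type merge -p '{"spec":{"runStrategy":"Halted"}}'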

    "title": "Use KubeVirt", "author" : "", "tags" : "laboratory, kubevirt installation, start vm, stop vm, delete vm, access console, lab", - "body": " - Use KubeVirt You can experiment with this lab online at KillercodaCreate a Virtual Machine: Download the VM manifest and explore it. Note it uses a container disk and as such doesn’t persist data. Such container disks currently exist for alpine, cirros and fedora. wget https://kubevirt. io/labs/manifests/vm. yamlless vm. yamlApply the manifest to Kubernetes. kubectl apply -f https://kubevirt. io/labs/manifests/vm. yamlYou should see following results virtualmachine. kubevirt. io “testvm” createdvirtualmachineinstancepreset. kubevirt. io “small” created Manage Virtual Machines (optional):: To get a list of existing Virtual Machines. Note the running status. kubectl get vmskubectl get vms -o yaml testvmTo start a Virtual Machine you can use: virtctl start testvmIf you installed virtctl via krew, you can use kubectl virt: # Start the virtual machine:kubectl virt start testvm# Stop the virtual machine:kubectl virt stop testvmAlternatively you could use kubectl patch: # Start the virtual machine:kubectl patch virtualmachine testvm --type merge -p \ '{ spec :{ running :true}}'# Stop the virtual machine:kubectl patch virtualmachine testvm --type merge -p \ '{ spec :{ running :false}}'Now that the Virtual Machine has been started, check the status (kubectl get vms). Note the Running status. You now want to see the instance of the vm you just started : kubectl get vmiskubectl get vmis -o yaml testvmNote the difference between VM (virtual machine) resource and VMI (virtual machine instance) resource. The VMI does not exist before starting the VM and the VMI will be deleted when you stop the VM. (Also note that restart of the VM is needed if you like to change some properties. Just modifying VM is not sufficient, the VMI has to be replaced. ) Accessing VMs (serial console): Connect to the serial console of the Cirros VM. Hit return / enter a few times and login with the displayed username and password. virtctl console testvmDisconnect from the virtual machine console by typing: ctrl+]. If you like to see the complete boot sequence logs from the console. You need to connect to the serial console just after starting the VM (you can test this by stopping and starting the VM again, see below). Controlling the State of the VM: To shut it down: virtctl stop testvmTo delete a Virtual Machine: kubectl delete vm testvmThis concludes this section of the lab. You can watch how the laboratory is done in the following video: Next Lab " + "body": " - Use KubeVirt You can experiment with this lab online at KillercodaCreate a Virtual Machine: Download the VM manifest and explore it. Note it uses a container disk and as such doesn’t persist data. Such container disks currently exist for alpine, cirros and fedora. wget https://kubevirt. io/labs/manifests/vm. yamlless vm. yamlApply the manifest to Kubernetes. kubectl apply -f https://kubevirt. io/labs/manifests/vm. yamlYou should see following results virtualmachine. kubevirt. io “testvm” createdvirtualmachineinstancepreset. kubevirt. io “small” created Manage Virtual Machines (optional):: To get a list of existing Virtual Machines. Note the running status. 
kubectl get vmskubectl get vms -o yaml testvmTo start a Virtual Machine you can use: virtctl start testvmIf you installed virtctl via krew, you can use kubectl virt: # Start the virtual machine:kubectl virt start testvm# Stop the virtual machine:kubectl virt stop testvmAlternatively you could use kubectl patch: # Start the virtual machine:kubectl patch virtualmachine testvm --type merge -p \ '{ spec :{ runStrategy : Always }}'# Stop the virtual machine:kubectl patch virtualmachine testvm --type merge -p \ '{ spec :{ runStrategy : Halted }}'Now that the Virtual Machine has been started, check the status (kubectl get vms). Note the Running status. You now want to see the instance of the vm you just started : kubectl get vmiskubectl get vmis -o yaml testvmNote the difference between VM (virtual machine) resource and VMI (virtual machine instance) resource. The VMI does not exist before starting the VM and the VMI will be deleted when you stop the VM. (Also note that restart of the VM is needed if you like to change some properties. Just modifying VM is not sufficient, the VMI has to be replaced. ) Accessing VMs (serial console): Connect to the serial console of the Cirros VM. Hit return / enter a few times and login with the displayed username and password. virtctl console testvmDisconnect from the virtual machine console by typing: ctrl+]. If you like to see the complete boot sequence logs from the console. You need to connect to the serial console just after starting the VM (you can test this by stopping and starting the VM again, see below). Controlling the State of the VM: To shut it down: virtctl stop testvmTo delete a Virtual Machine: kubectl delete vm testvmThis concludes this section of the lab. You can watch how the laboratory is done in the following video: Next Lab " }, { "id": 199, "url": "/labs/kubernetes/lab2", diff --git a/sitemap.xml b/sitemap.xml index 18ae7ce614..4de87759c8 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1690,6 +1690,6 @@ https://kubevirt.io//assets/files/summit24-sponsor.pdf -2024-10-15T10:11:02+00:00 +2024-10-30T13:09:03+00:00 diff --git a/sponsor/index.html b/sponsor/index.html index c5332c717b..60d26dfc65 100644 --- a/sponsor/index.html +++ b/sponsor/index.html @@ -53,7 +53,7 @@ - + diff --git a/ssp-operator/index.html b/ssp-operator/index.html index 9195975f18..d99d6960cd 100644 --- a/ssp-operator/index.html +++ b/ssp-operator/index.html @@ -53,7 +53,7 @@ - + diff --git a/summit/index.html b/summit/index.html index 4dd38a1b59..f55956fd37 100644 --- a/summit/index.html +++ b/summit/index.html @@ -53,7 +53,7 @@ - + diff --git a/tag/addons.html b/tag/addons.html index ae767169f7..4ae268d6d0 100644 --- a/tag/addons.html +++ b/tag/addons.html @@ -53,7 +53,7 @@ - + diff --git a/tag/admin-operations.html b/tag/admin-operations.html index 7527971960..9f96b80da8 100644 --- a/tag/admin-operations.html +++ b/tag/admin-operations.html @@ -53,7 +53,7 @@ - + diff --git a/tag/admin.html b/tag/admin.html index 513a62dd9f..1685fd9238 100644 --- a/tag/admin.html +++ b/tag/admin.html @@ -53,7 +53,7 @@ - + diff --git a/tag/advanced-vm-scheduling.html b/tag/advanced-vm-scheduling.html index 6c954ae498..3b61206565 100644 --- a/tag/advanced-vm-scheduling.html +++ b/tag/advanced-vm-scheduling.html @@ -53,7 +53,7 @@ - + diff --git a/tag/affinity.html b/tag/affinity.html index 56d8175c62..b0432073c5 100644 --- a/tag/affinity.html +++ b/tag/affinity.html @@ -53,7 +53,7 @@ - + diff --git a/tag/america.html b/tag/america.html index e26e6bef0d..14dd75b2c5 100644 --- 
a/tag/america.html +++ b/tag/america.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ami.html b/tag/ami.html index cafbbd1ffd..a4e7e1b186 100644 --- a/tag/ami.html +++ b/tag/ami.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ansible-collection.html b/tag/ansible-collection.html index defe81b7a9..335b9eeb14 100644 --- a/tag/ansible-collection.html +++ b/tag/ansible-collection.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ansible.html b/tag/ansible.html index b4025551b5..39e454242d 100644 --- a/tag/ansible.html +++ b/tag/ansible.html @@ -53,7 +53,7 @@ - + diff --git a/tag/api.html b/tag/api.html index 9cd3d74f33..cdb44317e4 100644 --- a/tag/api.html +++ b/tag/api.html @@ -53,7 +53,7 @@ - + diff --git a/tag/architecture.html b/tag/architecture.html index 09197c9b1e..6a602357e9 100644 --- a/tag/architecture.html +++ b/tag/architecture.html @@ -53,7 +53,7 @@ - + diff --git a/tag/authentication.html b/tag/authentication.html index 7fe12ad116..c48e5dd4ae 100644 --- a/tag/authentication.html +++ b/tag/authentication.html @@ -53,7 +53,7 @@ - + diff --git a/tag/autodeployer.html b/tag/autodeployer.html index ea53f9e4c9..6b7b20f8a4 100644 --- a/tag/autodeployer.html +++ b/tag/autodeployer.html @@ -53,7 +53,7 @@ - + diff --git a/tag/aws.html b/tag/aws.html index fc157210f2..4f5da68561 100644 --- a/tag/aws.html +++ b/tag/aws.html @@ -53,7 +53,7 @@ - + diff --git a/tag/basic-operations.html b/tag/basic-operations.html index da8d75d5ac..3c628e5359 100644 --- a/tag/basic-operations.html +++ b/tag/basic-operations.html @@ -53,7 +53,7 @@ - + diff --git a/tag/bridge.html b/tag/bridge.html index 2b603b542f..f993e18539 100644 --- a/tag/bridge.html +++ b/tag/bridge.html @@ -53,7 +53,7 @@ - + diff --git a/tag/build.html b/tag/build.html index 0d86c04ff5..3848d72240 100644 --- a/tag/build.html +++ b/tag/build.html @@ -53,7 +53,7 @@ - + diff --git a/tag/builder-tool.html b/tag/builder-tool.html index 0ba4fb3b83..fc8ea5a9fc 100644 --- a/tag/builder-tool.html +++ b/tag/builder-tool.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cdi.html b/tag/cdi.html index c25546584e..f5a039071e 100644 --- a/tag/cdi.html +++ b/tag/cdi.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ceph.html b/tag/ceph.html index 3da138953c..2c21549313 100644 --- a/tag/ceph.html +++ b/tag/ceph.html @@ -53,7 +53,7 @@ - + diff --git a/tag/changelog.html b/tag/changelog.html index 6cce8080e9..935c4d93e7 100644 --- a/tag/changelog.html +++ b/tag/changelog.html @@ -53,7 +53,7 @@ - + diff --git a/tag/chronyd.html b/tag/chronyd.html index e23efe72c8..82979cf54d 100644 --- a/tag/chronyd.html +++ b/tag/chronyd.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ci-cd.html b/tag/ci-cd.html index a53f17a95f..8863c7dda6 100644 --- a/tag/ci-cd.html +++ b/tag/ci-cd.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cicd.html b/tag/cicd.html index 9b021a7815..4f4548024c 100644 --- a/tag/cicd.html +++ b/tag/cicd.html @@ -53,7 +53,7 @@ - + diff --git a/tag/clearcontainers.html b/tag/clearcontainers.html index aafffdedcd..ffd1ac9357 100644 --- a/tag/clearcontainers.html +++ b/tag/clearcontainers.html @@ -53,7 +53,7 @@ - + diff --git a/tag/clone.html b/tag/clone.html index ae65b18012..d5fa1b755a 100644 --- a/tag/clone.html +++ b/tag/clone.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cloudnativecon.html b/tag/cloudnativecon.html index 930a4b202c..0b2a35472c 100644 --- a/tag/cloudnativecon.html +++ b/tag/cloudnativecon.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cluster-autoscaler.html b/tag/cluster-autoscaler.html index 47d077bf7e..f23f862f03 100644 --- a/tag/cluster-autoscaler.html +++ 
b/tag/cluster-autoscaler.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cluster-network-addons-operator.html b/tag/cluster-network-addons-operator.html index 65fe803748..615e18f55d 100644 --- a/tag/cluster-network-addons-operator.html +++ b/tag/cluster-network-addons-operator.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cnao.html b/tag/cnao.html index 52915c0137..e9000c3395 100644 --- a/tag/cnao.html +++ b/tag/cnao.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cncf.html b/tag/cncf.html index 626db52f54..62bdf767c8 100644 --- a/tag/cncf.html +++ b/tag/cncf.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cni.html b/tag/cni.html index 06a06233c6..8e51b0183d 100644 --- a/tag/cni.html +++ b/tag/cni.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cockpit.html b/tag/cockpit.html index 2f3b60958c..c8ef069709 100644 --- a/tag/cockpit.html +++ b/tag/cockpit.html @@ -53,7 +53,7 @@ - + diff --git a/tag/common-templates.html b/tag/common-templates.html index 34bbd1d35b..a96f1b4ef7 100644 --- a/tag/common-templates.html +++ b/tag/common-templates.html @@ -53,7 +53,7 @@ - + diff --git a/tag/community.html b/tag/community.html index 9aa841f4c1..3f0c3bf379 100644 --- a/tag/community.html +++ b/tag/community.html @@ -53,7 +53,7 @@ - + diff --git a/tag/composer-cli.html b/tag/composer-cli.html index 60a3eeae90..31a33284c1 100644 --- a/tag/composer-cli.html +++ b/tag/composer-cli.html @@ -53,7 +53,7 @@ - + diff --git a/tag/condition-types.html b/tag/condition-types.html index cb31da15e4..92880a46b9 100644 --- a/tag/condition-types.html +++ b/tag/condition-types.html @@ -53,7 +53,7 @@ - + diff --git a/tag/conference.html b/tag/conference.html index 2c79ebe0ee..981496cd8f 100644 --- a/tag/conference.html +++ b/tag/conference.html @@ -53,7 +53,7 @@ - + diff --git a/tag/connect-to-console.html b/tag/connect-to-console.html index bdf3fb5f14..baee1c7a2e 100644 --- a/tag/connect-to-console.html +++ b/tag/connect-to-console.html @@ -53,7 +53,7 @@ - + diff --git a/tag/connect-to-ssh.html b/tag/connect-to-ssh.html index f3a4ba23c0..4be460ee3f 100644 --- a/tag/connect-to-ssh.html +++ b/tag/connect-to-ssh.html @@ -53,7 +53,7 @@ - + diff --git a/tag/container.html b/tag/container.html index dba54b94e7..fc97c37a55 100644 --- a/tag/container.html +++ b/tag/container.html @@ -53,7 +53,7 @@ - + diff --git a/tag/containerdisk.html b/tag/containerdisk.html index 5fd34a083f..ba77999b12 100644 --- a/tag/containerdisk.html +++ b/tag/containerdisk.html @@ -53,7 +53,7 @@ - + diff --git a/tag/containerized-data-importer.html b/tag/containerized-data-importer.html index e2a3969df2..2fb22cd160 100644 --- a/tag/containerized-data-importer.html +++ b/tag/containerized-data-importer.html @@ -53,7 +53,7 @@ - + diff --git a/tag/continuous-integration.html b/tag/continuous-integration.html index f131249c8b..0da841d105 100644 --- a/tag/continuous-integration.html +++ b/tag/continuous-integration.html @@ -53,7 +53,7 @@ - + diff --git a/tag/contra-lib.html b/tag/contra-lib.html index 59ac7857ce..4527ca45e2 100644 --- a/tag/contra-lib.html +++ b/tag/contra-lib.html @@ -53,7 +53,7 @@ - + diff --git a/tag/coreos.html b/tag/coreos.html index 3a8004d25d..e92f62bb55 100644 --- a/tag/coreos.html +++ b/tag/coreos.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cpu-pinning.html b/tag/cpu-pinning.html index 92c6af73a0..851c90689d 100644 --- a/tag/cpu-pinning.html +++ b/tag/cpu-pinning.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cpumanager.html b/tag/cpumanager.html index b6f210dfab..0cd486d0b1 100644 --- a/tag/cpumanager.html +++ b/tag/cpumanager.html @@ -53,7 +53,7 @@ - 
+ diff --git a/tag/create-vm.html b/tag/create-vm.html index 2ac617eb2a..5d85757974 100644 --- a/tag/create-vm.html +++ b/tag/create-vm.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cri-o.html b/tag/cri-o.html index 66634dd6bf..7fc44343cc 100644 --- a/tag/cri-o.html +++ b/tag/cri-o.html @@ -53,7 +53,7 @@ - + diff --git a/tag/cri.html b/tag/cri.html index fd4ec4b323..fb6c2e696e 100644 --- a/tag/cri.html +++ b/tag/cri.html @@ -53,7 +53,7 @@ - + diff --git a/tag/custom-resources.html b/tag/custom-resources.html index 5e9401e884..28e24f98db 100644 --- a/tag/custom-resources.html +++ b/tag/custom-resources.html @@ -53,7 +53,7 @@ - + diff --git a/tag/datavolumes.html b/tag/datavolumes.html index 19d11d4168..5d8409d4dd 100644 --- a/tag/datavolumes.html +++ b/tag/datavolumes.html @@ -53,7 +53,7 @@ - + diff --git a/tag/debug.html b/tag/debug.html index c76eee9543..39a81a1013 100644 --- a/tag/debug.html +++ b/tag/debug.html @@ -53,7 +53,7 @@ - + diff --git a/tag/dedicated-network.html b/tag/dedicated-network.html index a30287b825..67e934c390 100644 --- a/tag/dedicated-network.html +++ b/tag/dedicated-network.html @@ -53,7 +53,7 @@ - + diff --git a/tag/design.html b/tag/design.html index 3f58151738..f839548e73 100644 --- a/tag/design.html +++ b/tag/design.html @@ -53,7 +53,7 @@ - + diff --git a/tag/development.html b/tag/development.html index 646429fe93..0e5b372c16 100644 --- a/tag/development.html +++ b/tag/development.html @@ -53,7 +53,7 @@ - + diff --git a/tag/device-plugins.html b/tag/device-plugins.html index 4d872ac31b..6a99f3519f 100644 --- a/tag/device-plugins.html +++ b/tag/device-plugins.html @@ -53,7 +53,7 @@ - + diff --git a/tag/disk-image.html b/tag/disk-image.html index 0eca8e8050..4208623fa1 100644 --- a/tag/disk-image.html +++ b/tag/disk-image.html @@ -53,7 +53,7 @@ - + diff --git a/tag/docker.html b/tag/docker.html index 4080229f6a..2a4bff5093 100644 --- a/tag/docker.html +++ b/tag/docker.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ebtables.html b/tag/ebtables.html index 5b39155000..8469324f30 100644 --- a/tag/ebtables.html +++ b/tag/ebtables.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ec2.html b/tag/ec2.html index c899442e39..7be7f3c176 100644 --- a/tag/ec2.html +++ b/tag/ec2.html @@ -53,7 +53,7 @@ - + diff --git a/tag/eks.html b/tag/eks.html index 5e36551842..47b18c96f9 100644 --- a/tag/eks.html +++ b/tag/eks.html @@ -53,7 +53,7 @@ - + diff --git a/tag/event.html b/tag/event.html index b2375b1ce4..471947c034 100644 --- a/tag/event.html +++ b/tag/event.html @@ -53,7 +53,7 @@ - + diff --git a/tag/eviction.html b/tag/eviction.html index b538e824f6..690f0e270e 100644 --- a/tag/eviction.html +++ b/tag/eviction.html @@ -53,7 +53,7 @@ - + diff --git a/tag/federation.html b/tag/federation.html index 274c6b6939..718e2787ad 100644 --- a/tag/federation.html +++ b/tag/federation.html @@ -53,7 +53,7 @@ - + diff --git a/tag/fedora.html b/tag/fedora.html index 2c05698564..2e9d6b4f21 100644 --- a/tag/fedora.html +++ b/tag/fedora.html @@ -53,7 +53,7 @@ - + diff --git a/tag/flannel.html b/tag/flannel.html index 656d9bb04d..86d7ae12da 100644 --- a/tag/flannel.html +++ b/tag/flannel.html @@ -53,7 +53,7 @@ - + diff --git a/tag/gathering.html b/tag/gathering.html index 85a0cd0dff..2231e1e611 100644 --- a/tag/gathering.html +++ b/tag/gathering.html @@ -53,7 +53,7 @@ - + diff --git a/tag/gcp.html b/tag/gcp.html index be1ca594e7..d8d2e173ea 100644 --- a/tag/gcp.html +++ b/tag/gcp.html @@ -53,7 +53,7 @@ - + diff --git a/tag/glusterfs.html b/tag/glusterfs.html index 20d97ed2d3..90ab2c48b2 100644 --- 
a/tag/glusterfs.html +++ b/tag/glusterfs.html @@ -53,7 +53,7 @@ - + diff --git a/tag/go.html b/tag/go.html index c52110578b..fd517ef0e0 100644 --- a/tag/go.html +++ b/tag/go.html @@ -53,7 +53,7 @@ - + diff --git a/tag/gpu-workloads.html b/tag/gpu-workloads.html index 02a4ea579a..4d4fd3ee0b 100644 --- a/tag/gpu-workloads.html +++ b/tag/gpu-workloads.html @@ -53,7 +53,7 @@ - + diff --git a/tag/gpu.html b/tag/gpu.html index cb90230044..0a5d62c26d 100644 --- a/tag/gpu.html +++ b/tag/gpu.html @@ -53,7 +53,7 @@ - + diff --git a/tag/grafana.html b/tag/grafana.html index 0cdb713a29..6fea562479 100644 --- a/tag/grafana.html +++ b/tag/grafana.html @@ -53,7 +53,7 @@ - + diff --git a/tag/hco.html b/tag/hco.html index aa5045d3e8..5e02443368 100644 --- a/tag/hco.html +++ b/tag/hco.html @@ -53,7 +53,7 @@ - + diff --git a/tag/heketi.html b/tag/heketi.html index ef2ba61a20..8390f30665 100644 --- a/tag/heketi.html +++ b/tag/heketi.html @@ -53,7 +53,7 @@ - + diff --git a/tag/hilights.html b/tag/hilights.html index a5182cea1b..66eec4af9a 100644 --- a/tag/hilights.html +++ b/tag/hilights.html @@ -53,7 +53,7 @@ - + diff --git a/tag/homelab.html b/tag/homelab.html index 1b10709ccf..d3babf7379 100644 --- a/tag/homelab.html +++ b/tag/homelab.html @@ -53,7 +53,7 @@ - + diff --git a/tag/hugepages.html b/tag/hugepages.html index 1ea8bf43fc..5e3c165c8f 100644 --- a/tag/hugepages.html +++ b/tag/hugepages.html @@ -53,7 +53,7 @@ - + diff --git a/tag/hyperconverged-operator.html b/tag/hyperconverged-operator.html index 178aa307b0..027e34270a 100644 --- a/tag/hyperconverged-operator.html +++ b/tag/hyperconverged-operator.html @@ -53,7 +53,7 @@ - + diff --git a/tag/iac.html b/tag/iac.html index fc324d789d..df7a27a648 100644 --- a/tag/iac.html +++ b/tag/iac.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ignition.html b/tag/ignition.html index f13a66c4db..40895f96e2 100644 --- a/tag/ignition.html +++ b/tag/ignition.html @@ -53,7 +53,7 @@ - + diff --git a/tag/images.html b/tag/images.html index 4b2b77aec8..24949d83a4 100644 --- a/tag/images.html +++ b/tag/images.html @@ -53,7 +53,7 @@ - + diff --git a/tag/import.html b/tag/import.html index 8b58d14092..557ceca50c 100644 --- a/tag/import.html +++ b/tag/import.html @@ -53,7 +53,7 @@ - + diff --git a/tag/infrastructure.html b/tag/infrastructure.html index c07002c7f7..c75e740b00 100644 --- a/tag/infrastructure.html +++ b/tag/infrastructure.html @@ -53,7 +53,7 @@ - + diff --git a/tag/installing-kubevirt.html b/tag/installing-kubevirt.html index 570538afa9..ac43a5e5a6 100644 --- a/tag/installing-kubevirt.html +++ b/tag/installing-kubevirt.html @@ -53,7 +53,7 @@ - + diff --git a/tag/instancetypes.html b/tag/instancetypes.html index 0edb4e9024..b7a405c4f0 100644 --- a/tag/instancetypes.html +++ b/tag/instancetypes.html @@ -53,7 +53,7 @@ - + diff --git a/tag/intel.html b/tag/intel.html index 32205fbe90..c3bed3e8ad 100644 --- a/tag/intel.html +++ b/tag/intel.html @@ -53,7 +53,7 @@ - + diff --git a/tag/iptables.html b/tag/iptables.html index 8b2899120a..4109373743 100644 --- a/tag/iptables.html +++ b/tag/iptables.html @@ -53,7 +53,7 @@ - + diff --git a/tag/istio.html b/tag/istio.html index 80333f6972..cce0955283 100644 --- a/tag/istio.html +++ b/tag/istio.html @@ -53,7 +53,7 @@ - + diff --git a/tag/jenkins.html b/tag/jenkins.html index 2994bf3b50..cf2603d68c 100644 --- a/tag/jenkins.html +++ b/tag/jenkins.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubecon.html b/tag/kubecon.html index 3be32105df..81787a04b6 100644 --- a/tag/kubecon.html +++ b/tag/kubecon.html @@ -53,7 +53,7 @@ - + diff 
--git a/tag/kubefed.html b/tag/kubefed.html index 5b26e96276..644b77bc25 100644 --- a/tag/kubefed.html +++ b/tag/kubefed.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubernetes-nmstate.html b/tag/kubernetes-nmstate.html index 45228ea04e..6ac63ac5ad 100644 --- a/tag/kubernetes-nmstate.html +++ b/tag/kubernetes-nmstate.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubernetes.html b/tag/kubernetes.html index 6beb63f6ca..059f197e1d 100644 --- a/tag/kubernetes.html +++ b/tag/kubernetes.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubetron.html b/tag/kubetron.html index 18700ab9b9..d9adcc1bc4 100644 --- a/tag/kubetron.html +++ b/tag/kubetron.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt-ansible.html b/tag/kubevirt-ansible.html index a9c5bf6301..ea7306aeea 100644 --- a/tag/kubevirt-ansible.html +++ b/tag/kubevirt-ansible.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt-hyperconverged.html b/tag/kubevirt-hyperconverged.html index 5355261a38..ba83f90f07 100644 --- a/tag/kubevirt-hyperconverged.html +++ b/tag/kubevirt-hyperconverged.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt-installation.html b/tag/kubevirt-installation.html index 5aaf830e62..2c69be449d 100644 --- a/tag/kubevirt-installation.html +++ b/tag/kubevirt-installation.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt-objects.html b/tag/kubevirt-objects.html index 17ca7d171e..089b0abc4b 100644 --- a/tag/kubevirt-objects.html +++ b/tag/kubevirt-objects.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt-tekton-tasks.html b/tag/kubevirt-tekton-tasks.html index f7adf45111..946f7ae023 100644 --- a/tag/kubevirt-tekton-tasks.html +++ b/tag/kubevirt-tekton-tasks.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt-tutorial.html b/tag/kubevirt-tutorial.html index 9ebbbf7905..e3850188a2 100644 --- a/tag/kubevirt-tutorial.html +++ b/tag/kubevirt-tutorial.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt-upgrade.html b/tag/kubevirt-upgrade.html index 4679c45e97..fbd616fb10 100644 --- a/tag/kubevirt-upgrade.html +++ b/tag/kubevirt-upgrade.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt.core.html b/tag/kubevirt.core.html index 300288ca88..e087e55970 100644 --- a/tag/kubevirt.core.html +++ b/tag/kubevirt.core.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirt.html b/tag/kubevirt.html index e8b104e266..30870e6b78 100644 --- a/tag/kubevirt.html +++ b/tag/kubevirt.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kubevirtci.html b/tag/kubevirtci.html index 1c1443bd3b..8ed92a608f 100644 --- a/tag/kubevirtci.html +++ b/tag/kubevirtci.html @@ -53,7 +53,7 @@ - + diff --git a/tag/kvm.html b/tag/kvm.html index 037c6f4a2c..a8b81c5091 100644 --- a/tag/kvm.html +++ b/tag/kvm.html @@ -53,7 +53,7 @@ - + diff --git a/tag/lab.html b/tag/lab.html index 1d1d6c6ba0..cd3c821b92 100644 --- a/tag/lab.html +++ b/tag/lab.html @@ -53,7 +53,7 @@ - + diff --git a/tag/laboratory.html b/tag/laboratory.html index c24ee3025a..af89b635ee 100644 --- a/tag/laboratory.html +++ b/tag/laboratory.html @@ -53,7 +53,7 @@ - + diff --git a/tag/libvirt.html b/tag/libvirt.html index 5347799a3a..b4a91f4e58 100644 --- a/tag/libvirt.html +++ b/tag/libvirt.html @@ -53,7 +53,7 @@ - + diff --git a/tag/lifecycle.html b/tag/lifecycle.html index 518787a117..88ec21ee42 100644 --- a/tag/lifecycle.html +++ b/tag/lifecycle.html @@ -53,7 +53,7 @@ - + diff --git a/tag/live-migration.html b/tag/live-migration.html index 0b4d70382d..0a0b6732ec 100644 --- a/tag/live-migration.html +++ b/tag/live-migration.html @@ -53,7 +53,7 @@ - + diff --git a/tag/load-balancer.html 
b/tag/load-balancer.html index 3b2d611c16..1a926ea0c8 100644 --- a/tag/load-balancer.html +++ b/tag/load-balancer.html @@ -53,7 +53,7 @@ - + diff --git a/tag/memory.html b/tag/memory.html index 084d890af7..16576e657e 100644 --- a/tag/memory.html +++ b/tag/memory.html @@ -53,7 +53,7 @@ - + diff --git a/tag/mesh.html b/tag/mesh.html index 23c2507952..3cbe3a5f07 100644 --- a/tag/mesh.html +++ b/tag/mesh.html @@ -53,7 +53,7 @@ - + diff --git a/tag/metallb.html b/tag/metallb.html index a2c93ec725..a96506aca0 100644 --- a/tag/metallb.html +++ b/tag/metallb.html @@ -53,7 +53,7 @@ - + diff --git a/tag/metrics.html b/tag/metrics.html index cda4e3db04..b24abf65d9 100644 --- a/tag/metrics.html +++ b/tag/metrics.html @@ -53,7 +53,7 @@ - + diff --git a/tag/microsoft-windows-container.html b/tag/microsoft-windows-container.html index 1a7777e71f..0a0ec55212 100644 --- a/tag/microsoft-windows-container.html +++ b/tag/microsoft-windows-container.html @@ -53,7 +53,7 @@ - + diff --git a/tag/microsoft-windows-kubernetes.html b/tag/microsoft-windows-kubernetes.html index 5b188710d4..52d3cce226 100644 --- a/tag/microsoft-windows-kubernetes.html +++ b/tag/microsoft-windows-kubernetes.html @@ -53,7 +53,7 @@ - + diff --git a/tag/milestone.html b/tag/milestone.html index 56093778ec..08134a4212 100644 --- a/tag/milestone.html +++ b/tag/milestone.html @@ -53,7 +53,7 @@ - + diff --git a/tag/minikube.html b/tag/minikube.html index 6c25e15b4a..392a0dfba8 100644 --- a/tag/minikube.html +++ b/tag/minikube.html @@ -53,7 +53,7 @@ - + diff --git a/tag/monitoring.html b/tag/monitoring.html index 55d184d0f2..f0b179d9a0 100644 --- a/tag/monitoring.html +++ b/tag/monitoring.html @@ -53,7 +53,7 @@ - + diff --git a/tag/multicluster.html b/tag/multicluster.html index 79ed0a0f71..c0ecfbe4cc 100644 --- a/tag/multicluster.html +++ b/tag/multicluster.html @@ -53,7 +53,7 @@ - + diff --git a/tag/multiple-networks.html b/tag/multiple-networks.html index c0a395eb7f..e2e765b595 100644 --- a/tag/multiple-networks.html +++ b/tag/multiple-networks.html @@ -53,7 +53,7 @@ - + diff --git a/tag/multus.html b/tag/multus.html index 1d4334c039..30c29f2e65 100644 --- a/tag/multus.html +++ b/tag/multus.html @@ -53,7 +53,7 @@ - + diff --git a/tag/network.html b/tag/network.html index a19009a2ee..c4e038d981 100644 --- a/tag/network.html +++ b/tag/network.html @@ -53,7 +53,7 @@ - + diff --git a/tag/networking.html b/tag/networking.html index f2f941940a..b7fd8dca61 100644 --- a/tag/networking.html +++ b/tag/networking.html @@ -53,7 +53,7 @@ - + diff --git a/tag/networkpolicy.html b/tag/networkpolicy.html index 6dc846cbd6..f47d7e77d7 100644 --- a/tag/networkpolicy.html +++ b/tag/networkpolicy.html @@ -53,7 +53,7 @@ - + diff --git a/tag/neutron.html b/tag/neutron.html index 20fd4203f0..381e91f96c 100644 --- a/tag/neutron.html +++ b/tag/neutron.html @@ -53,7 +53,7 @@ - + diff --git a/tag/nmo.html b/tag/nmo.html index e9f1c16c43..46be9ee992 100644 --- a/tag/nmo.html +++ b/tag/nmo.html @@ -53,7 +53,7 @@ - + diff --git a/tag/nmstate.html b/tag/nmstate.html index 8514c45cd6..b7e39d013e 100644 --- a/tag/nmstate.html +++ b/tag/nmstate.html @@ -53,7 +53,7 @@ - + diff --git a/tag/node-drain.html b/tag/node-drain.html index f5580448ea..7b62169e36 100644 --- a/tag/node-drain.html +++ b/tag/node-drain.html @@ -53,7 +53,7 @@ - + diff --git a/tag/node-exporter.html b/tag/node-exporter.html index bf6bd7d191..091f9b97be 100644 --- a/tag/node-exporter.html +++ b/tag/node-exporter.html @@ -53,7 +53,7 @@ - + diff --git a/tag/novnc.html b/tag/novnc.html index 
8e6cf106b3..6fc0d52641 100644 --- a/tag/novnc.html +++ b/tag/novnc.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ntp.html b/tag/ntp.html index f69b02f0e0..398b6cbd68 100644 --- a/tag/ntp.html +++ b/tag/ntp.html @@ -53,7 +53,7 @@ - + diff --git a/tag/numa.html b/tag/numa.html index f491cc22c2..347ad8e51a 100644 --- a/tag/numa.html +++ b/tag/numa.html @@ -53,7 +53,7 @@ - + diff --git a/tag/nvidia.html b/tag/nvidia.html index 576865445e..81721a2488 100644 --- a/tag/nvidia.html +++ b/tag/nvidia.html @@ -53,7 +53,7 @@ - + diff --git a/tag/objects.html b/tag/objects.html index 3f23a50f39..1a00841a13 100644 --- a/tag/objects.html +++ b/tag/objects.html @@ -53,7 +53,7 @@ - + diff --git a/tag/octant.html b/tag/octant.html index 2fce089b8b..a628a2f347 100644 --- a/tag/octant.html +++ b/tag/octant.html @@ -53,7 +53,7 @@ - + diff --git a/tag/okd-console.html b/tag/okd-console.html index 8b2872359a..fc7aac6c97 100644 --- a/tag/okd-console.html +++ b/tag/okd-console.html @@ -53,7 +53,7 @@ - + diff --git a/tag/okd.html b/tag/okd.html index 72e0b7a449..8c9ae01fd1 100644 --- a/tag/okd.html +++ b/tag/okd.html @@ -53,7 +53,7 @@ - + diff --git a/tag/openshift-console.html b/tag/openshift-console.html index 4a52351e2a..f47dca4e5d 100644 --- a/tag/openshift-console.html +++ b/tag/openshift-console.html @@ -53,7 +53,7 @@ - + diff --git a/tag/openshift-web-console.html b/tag/openshift-web-console.html index 15df49a62e..c27d641a17 100644 --- a/tag/openshift-web-console.html +++ b/tag/openshift-web-console.html @@ -53,7 +53,7 @@ - + diff --git a/tag/openshift.html b/tag/openshift.html index fc69b34fe6..df122ded6e 100644 --- a/tag/openshift.html +++ b/tag/openshift.html @@ -53,7 +53,7 @@ - + diff --git a/tag/openstack.html b/tag/openstack.html index 16e20a8e2d..3b84ec3e52 100644 --- a/tag/openstack.html +++ b/tag/openstack.html @@ -53,7 +53,7 @@ - + diff --git a/tag/operation.html b/tag/operation.html index cb2a2e3220..f5a4f8e5ac 100644 --- a/tag/operation.html +++ b/tag/operation.html @@ -53,7 +53,7 @@ - + diff --git a/tag/operations.html b/tag/operations.html index bda3d498b7..d2dc691f08 100644 --- a/tag/operations.html +++ b/tag/operations.html @@ -53,7 +53,7 @@ - + diff --git a/tag/operator-manual.html b/tag/operator-manual.html index ec2eb5d9e9..f75e467c36 100644 --- a/tag/operator-manual.html +++ b/tag/operator-manual.html @@ -53,7 +53,7 @@ - + diff --git a/tag/overcommitment.html b/tag/overcommitment.html index 929eb6c88c..6429274194 100644 --- a/tag/overcommitment.html +++ b/tag/overcommitment.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ovirt.html b/tag/ovirt.html index ad31b650cc..889992f2b6 100644 --- a/tag/ovirt.html +++ b/tag/ovirt.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ovn.html b/tag/ovn.html index 4c16516407..3290288803 100644 --- a/tag/ovn.html +++ b/tag/ovn.html @@ -53,7 +53,7 @@ - + diff --git a/tag/ovs-cni.html b/tag/ovs-cni.html index 878dd8a894..c30abec48e 100644 --- a/tag/ovs-cni.html +++ b/tag/ovs-cni.html @@ -53,7 +53,7 @@ - + diff --git a/tag/party-time.html b/tag/party-time.html index a3c4ca2a84..03ea69c255 100644 --- a/tag/party-time.html +++ b/tag/party-time.html @@ -53,7 +53,7 @@ - + diff --git a/tag/pass-through.html b/tag/pass-through.html index bbfbfbe568..5a220d50cb 100644 --- a/tag/pass-through.html +++ b/tag/pass-through.html @@ -53,7 +53,7 @@ - + diff --git a/tag/passthrough.html b/tag/passthrough.html index d83d518a1a..acd926f9d2 100644 --- a/tag/passthrough.html +++ b/tag/passthrough.html @@ -53,7 +53,7 @@ - + diff --git a/tag/preferences.html b/tag/preferences.html index 
02b30f6f68..72abb568d7 100644 --- a/tag/preferences.html +++ b/tag/preferences.html @@ -53,7 +53,7 @@ - + diff --git a/tag/prometheus-operator.html b/tag/prometheus-operator.html index c419a35aca..b18bd8f36c 100644 --- a/tag/prometheus-operator.html +++ b/tag/prometheus-operator.html @@ -53,7 +53,7 @@ - + diff --git a/tag/prometheus.html b/tag/prometheus.html index 730a2a3500..dd2e020b78 100644 --- a/tag/prometheus.html +++ b/tag/prometheus.html @@ -53,7 +53,7 @@ - + diff --git a/tag/prow.html b/tag/prow.html index bba821336d..0a7af0530c 100644 --- a/tag/prow.html +++ b/tag/prow.html @@ -53,7 +53,7 @@ - + diff --git a/tag/qemu.html b/tag/qemu.html index 122173f5eb..ed80df4363 100644 --- a/tag/qemu.html +++ b/tag/qemu.html @@ -53,7 +53,7 @@ - + diff --git a/tag/quickstart.html b/tag/quickstart.html index 568774c16f..1ecefccf26 100644 --- a/tag/quickstart.html +++ b/tag/quickstart.html @@ -53,7 +53,7 @@ - + diff --git a/tag/rbac.html b/tag/rbac.html index e685ffedc9..0d56def669 100644 --- a/tag/rbac.html +++ b/tag/rbac.html @@ -53,7 +53,7 @@ - + diff --git a/tag/real-time.html b/tag/real-time.html index ba024afb8d..2f0d28f9e7 100644 --- a/tag/real-time.html +++ b/tag/real-time.html @@ -53,7 +53,7 @@ - + diff --git a/tag/registry.html b/tag/registry.html index 5b23e1329f..2aab252394 100644 --- a/tag/registry.html +++ b/tag/registry.html @@ -53,7 +53,7 @@ - + diff --git a/tag/release-notes.html b/tag/release-notes.html index 77a1978921..8d1ec3a5d8 100644 --- a/tag/release-notes.html +++ b/tag/release-notes.html @@ -53,7 +53,7 @@ - + diff --git a/tag/release.html b/tag/release.html index e1f4e37e13..0b0c1bb98f 100644 --- a/tag/release.html +++ b/tag/release.html @@ -53,7 +53,7 @@ - + diff --git a/tag/remove-vm.html b/tag/remove-vm.html index 6d1fce1145..6433ebed83 100644 --- a/tag/remove-vm.html +++ b/tag/remove-vm.html @@ -53,7 +53,7 @@ - + diff --git a/tag/review.html b/tag/review.html index b7b9209626..2ecb159500 100644 --- a/tag/review.html +++ b/tag/review.html @@ -53,7 +53,7 @@ - + diff --git a/tag/rhcos.html b/tag/rhcos.html index 44d54f884f..2a2dafe17f 100644 --- a/tag/rhcos.html +++ b/tag/rhcos.html @@ -53,7 +53,7 @@ - + diff --git a/tag/roadmap.html b/tag/roadmap.html index 4a9b9ec28a..45ac397881 100644 --- a/tag/roadmap.html +++ b/tag/roadmap.html @@ -53,7 +53,7 @@ - + diff --git a/tag/roles.html b/tag/roles.html index 259b9b485b..af9d865542 100644 --- a/tag/roles.html +++ b/tag/roles.html @@ -53,7 +53,7 @@ - + diff --git a/tag/rook.html b/tag/rook.html index ad68bc26f7..9005842d98 100644 --- a/tag/rook.html +++ b/tag/rook.html @@ -53,7 +53,7 @@ - + diff --git a/tag/sandbox.html b/tag/sandbox.html index d0830a1698..8576e63057 100644 --- a/tag/sandbox.html +++ b/tag/sandbox.html @@ -53,7 +53,7 @@ - + diff --git a/tag/scheduling.html b/tag/scheduling.html index d05d5b5212..3e7e4372ba 100644 --- a/tag/scheduling.html +++ b/tag/scheduling.html @@ -53,7 +53,7 @@ - + diff --git a/tag/sdn.html b/tag/sdn.html index e0fb9be45c..ff042e531b 100644 --- a/tag/sdn.html +++ b/tag/sdn.html @@ -53,7 +53,7 @@ - + diff --git a/tag/security.html b/tag/security.html index 6ef377c716..5a8145820e 100644 --- a/tag/security.html +++ b/tag/security.html @@ -53,7 +53,7 @@ - + diff --git a/tag/service-mesh.html b/tag/service-mesh.html index c0ef422acd..8a982d6597 100644 --- a/tag/service-mesh.html +++ b/tag/service-mesh.html @@ -53,7 +53,7 @@ - + diff --git a/tag/serviceaccount.html b/tag/serviceaccount.html index b7259f814c..0cdf24e99b 100644 --- a/tag/serviceaccount.html +++ b/tag/serviceaccount.html @@ -53,7 
+53,7 @@ - + diff --git a/tag/skydive.html b/tag/skydive.html index 9117c36c08..139e1030a6 100644 --- a/tag/skydive.html +++ b/tag/skydive.html @@ -53,7 +53,7 @@ - + diff --git a/tag/start-vm.html b/tag/start-vm.html index 77ea5018d5..a2392561ee 100644 --- a/tag/start-vm.html +++ b/tag/start-vm.html @@ -53,7 +53,7 @@ - + diff --git a/tag/stop-vm.html b/tag/stop-vm.html index 76679d483e..7e7e95a623 100644 --- a/tag/stop-vm.html +++ b/tag/stop-vm.html @@ -53,7 +53,7 @@ - + diff --git a/tag/storage.html b/tag/storage.html index db6562b4bf..3d64b43e3e 100644 --- a/tag/storage.html +++ b/tag/storage.html @@ -53,7 +53,7 @@ - + diff --git a/tag/talk.html b/tag/talk.html index 1fce3d3de9..0052170613 100644 --- a/tag/talk.html +++ b/tag/talk.html @@ -53,7 +53,7 @@ - + diff --git a/tag/tekton-pipelines.html b/tag/tekton-pipelines.html index 2acc4aa65c..e9c78d0d48 100644 --- a/tag/tekton-pipelines.html +++ b/tag/tekton-pipelines.html @@ -53,7 +53,7 @@ - + diff --git a/tag/topologykeys.html b/tag/topologykeys.html index 521ecf2eb1..83e4b301fc 100644 --- a/tag/topologykeys.html +++ b/tag/topologykeys.html @@ -53,7 +53,7 @@ - + diff --git a/tag/tproxy.html b/tag/tproxy.html index 09a09a38b5..daaff40d57 100644 --- a/tag/tproxy.html +++ b/tag/tproxy.html @@ -53,7 +53,7 @@ - + diff --git a/tag/unit-testing.html b/tag/unit-testing.html index 6e55e64824..eb913f55ba 100644 --- a/tag/unit-testing.html +++ b/tag/unit-testing.html @@ -53,7 +53,7 @@ - + diff --git a/tag/upgrading.html b/tag/upgrading.html index 0dfa1f62cc..87f1e8df9b 100644 --- a/tag/upgrading.html +++ b/tag/upgrading.html @@ -53,7 +53,7 @@ - + diff --git a/tag/upload.html b/tag/upload.html index 305cf3d9e6..d87cadba80 100644 --- a/tag/upload.html +++ b/tag/upload.html @@ -53,7 +53,7 @@ - + diff --git a/tag/use-kubevirt.html b/tag/use-kubevirt.html index b6ec163400..e9538bd52e 100644 --- a/tag/use-kubevirt.html +++ b/tag/use-kubevirt.html @@ -53,7 +53,7 @@ - + diff --git a/tag/user-interface.html b/tag/user-interface.html index c3116e5cc4..05e7716c30 100644 --- a/tag/user-interface.html +++ b/tag/user-interface.html @@ -53,7 +53,7 @@ - + diff --git a/tag/v1.0.html b/tag/v1.0.html index 30728a62c1..20ed1d00a3 100644 --- a/tag/v1.0.html +++ b/tag/v1.0.html @@ -53,7 +53,7 @@ - + diff --git a/tag/v1.1.0.html b/tag/v1.1.0.html index 2f7cef580c..bbd0c4e40d 100644 --- a/tag/v1.1.0.html +++ b/tag/v1.1.0.html @@ -53,7 +53,7 @@ - + diff --git a/tag/vagrant.html b/tag/vagrant.html index b87357e167..df2d49a511 100644 --- a/tag/vagrant.html +++ b/tag/vagrant.html @@ -53,7 +53,7 @@ - + diff --git a/tag/vgpu.html b/tag/vgpu.html index f4b4fdc167..90cabb62d8 100644 --- a/tag/vgpu.html +++ b/tag/vgpu.html @@ -53,7 +53,7 @@ - + diff --git a/tag/video.html b/tag/video.html index a66a5f6e3b..8e99c15c4b 100644 --- a/tag/video.html +++ b/tag/video.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virt-customize.html b/tag/virt-customize.html index 37d7962b4f..20f927423a 100644 --- a/tag/virt-customize.html +++ b/tag/virt-customize.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virtlet.html b/tag/virtlet.html index a363dec233..de204f278e 100644 --- a/tag/virtlet.html +++ b/tag/virtlet.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virtual-machine-management.html b/tag/virtual-machine-management.html index 63e6cd9b6a..01d95916b6 100644 --- a/tag/virtual-machine-management.html +++ b/tag/virtual-machine-management.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virtual-machine.html b/tag/virtual-machine.html index a590131cfb..f4447630e1 100644 --- a/tag/virtual-machine.html +++ 
b/tag/virtual-machine.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virtual-machines.html b/tag/virtual-machines.html index e3229dfddd..394a533be2 100644 --- a/tag/virtual-machines.html +++ b/tag/virtual-machines.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virtualmachine.html b/tag/virtualmachine.html index ee00d3c9c9..93eb70a316 100644 --- a/tag/virtualmachine.html +++ b/tag/virtualmachine.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virtualmachineinstancetype.html b/tag/virtualmachineinstancetype.html index 8f2beb65cf..4342a60075 100644 --- a/tag/virtualmachineinstancetype.html +++ b/tag/virtualmachineinstancetype.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virtualmachinepreference.html b/tag/virtualmachinepreference.html index 3fca1b53c8..58c272ac46 100644 --- a/tag/virtualmachinepreference.html +++ b/tag/virtualmachinepreference.html @@ -53,7 +53,7 @@ - + diff --git a/tag/virtvnc.html b/tag/virtvnc.html index 9935bca63a..f144e1b6a0 100644 --- a/tag/virtvnc.html +++ b/tag/virtvnc.html @@ -53,7 +53,7 @@ - + diff --git a/tag/vm-import.html b/tag/vm-import.html index 2b21506e32..df5945a917 100644 --- a/tag/vm-import.html +++ b/tag/vm-import.html @@ -53,7 +53,7 @@ - + diff --git a/tag/vm.html b/tag/vm.html index 59a1252203..746cb526ee 100644 --- a/tag/vm.html +++ b/tag/vm.html @@ -53,7 +53,7 @@ - + diff --git a/tag/volume-types.html b/tag/volume-types.html index 4f2aa1b8c4..19926400a8 100644 --- a/tag/volume-types.html +++ b/tag/volume-types.html @@ -53,7 +53,7 @@ - + diff --git a/tag/vscode.html b/tag/vscode.html index 1f306062e9..538bc0ea30 100644 --- a/tag/vscode.html +++ b/tag/vscode.html @@ -53,7 +53,7 @@ - + diff --git a/tag/weavenet.html b/tag/weavenet.html index 8c26bb5df3..95dd88238e 100644 --- a/tag/weavenet.html +++ b/tag/weavenet.html @@ -53,7 +53,7 @@ - + diff --git a/tag/web-interface.html b/tag/web-interface.html index d84a404632..19fc23d1db 100644 --- a/tag/web-interface.html +++ b/tag/web-interface.html @@ -53,7 +53,7 @@ - + diff --git a/tag/website.html b/tag/website.html index b95bb3720a..e49e92965f 100644 --- a/tag/website.html +++ b/tag/website.html @@ -53,7 +53,7 @@ - + diff --git a/tag/windows.html b/tag/windows.html index 3ad11a83a8..7ef4174961 100644 --- a/tag/windows.html +++ b/tag/windows.html @@ -53,7 +53,7 @@ - + diff --git a/videos/community/meetings.html b/videos/community/meetings.html index b089ed02ae..af82d8f048 100644 --- a/videos/community/meetings.html +++ b/videos/community/meetings.html @@ -53,7 +53,7 @@ - + diff --git a/videos/index.html b/videos/index.html index b1ee123fed..103c9f485c 100644 --- a/videos/index.html +++ b/videos/index.html @@ -53,7 +53,7 @@ - + diff --git a/videos/interviews.html b/videos/interviews.html index 07353c1956..cdbb25801d 100644 --- a/videos/interviews.html +++ b/videos/interviews.html @@ -53,7 +53,7 @@ - + diff --git a/videos/kubevirt-summit.html b/videos/kubevirt-summit.html index cc02478ace..aa95b5ce0c 100644 --- a/videos/kubevirt-summit.html +++ b/videos/kubevirt-summit.html @@ -53,7 +53,7 @@ - + diff --git a/videos/talks.html b/videos/talks.html index 2e48d6b53a..24721e2123 100644 --- a/videos/talks.html +++ b/videos/talks.html @@ -53,7 +53,7 @@ - + diff --git a/videos/tech-demos.html b/videos/tech-demos.html index ced9c0083c..4dd5ef40be 100644 --- a/videos/tech-demos.html +++ b/videos/tech-demos.html @@ -53,7 +53,7 @@ - +
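For readers following the lab text earlier in this diff, a minimal VirtualMachine manifest using the runStrategy field could look like the sketch below. This is an illustrative sketch, not a manifest taken from the site: the VM name testvm matches the lab, but the labels, the Cirros container disk image and the memory request are assumed placeholder values.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: testvm                 # matches the lab's example VM name
    spec:
      runStrategy: Always          # replaces the older spec.running boolean; set to Halted to keep the VM powered off
      template:
        metadata:
          labels:
            kubevirt.io/domain: testvm   # assumed label
        spec:
          domain:
            devices:
              disks:
                - name: containerdisk
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 64Mi       # illustrative value
          volumes:
            - name: containerdisk
              containerDisk:
                image: quay.io/kubevirt/cirros-container-disk-demo   # assumed demo image

With such an object applied, patching spec.runStrategy between "Always" and "Halted" (kubectl patch virtualmachine testvm --type merge -p '{"spec":{"runStrategy": "Halted"}}') has the same effect as the virtctl start testvm / virtctl stop testvm commands shown in the lab.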