From 20031244f334297ab24b9d73f1414e9c08c7076e Mon Sep 17 00:00:00 2001 From: HHDocs Date: Thu, 25 Jan 2024 18:24:07 +0000 Subject: [PATCH] Deployed f45cebc to dev with MkDocs 1.5.3 and mike 1.1.2 --- dev/reference/api/index.html | 86 ++++++++++++++++++++++++++++++++++- dev/search/search_index.json | 2 +- dev/sitemap.xml.gz | Bin 441 -> 441 bytes 3 files changed, 85 insertions(+), 3 deletions(-) diff --git a/dev/reference/api/index.html b/dev/reference/api/index.html index a30995d..99a53fb 100644 --- a/dev/reference/api/index.html +++ b/dev/reference/api/index.html @@ -1259,6 +1259,13 @@ ConnBundled + + +
  • + + ConnESLAG + +
  • @@ -1504,6 +1511,13 @@ SwitchGroupStatus +
  • + +
  • + + SwitchRedundancy + +
  • @@ -2214,6 +2228,13 @@ ConnBundled +
  • + +
  • + + ConnESLAG + +
  • @@ -2459,6 +2480,13 @@ SwitchGroupStatus +
  • + +
  • + + SwitchRedundancy + +
  • @@ -3573,6 +3601,28 @@

    ConnBundled

    +

    ConnESLAG

    +

    ConnESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)

    +

    Appears in: +- ConnectionSpec

    + + + + + + + + + + + + + + + + + +
    FieldDescription
    links ServerToSwitchLink arrayLinks is the list of server-to-switch links
    mtu integerMTU is the MTU to be configured on the switch port or port channel
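For illustration only, a minimal sketch of a Connection object using ESLAG, based on the fields above (object, server, and port names are hypothetical, and the server/switch link endpoints are assumed to follow the "device/port" BasePortName format):

apiVersion: wiring.githedgehog.com/v1alpha2
kind: Connection
metadata:
  name: server-1--eslag--leaf-1--leaf-2 # hypothetical name
spec:
  eslag:
    links: # one entry per server-to-switch link
      - server:
          port: server-1/enp2s1 # hypothetical server NIC
        switch:
          port: leaf-1/Ethernet1
      - server:
          port: server-1/enp2s2
        switch:
          port: leaf-2/Ethernet1
    mtu: 9100 # optional MTU for the switch port or port channel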

    ConnExternal

    ConnExternal defines the external connection (single switch to a single external device with a single link)

    Appears in: @@ -3650,7 +3700,7 @@

    ConnFabricLinkSwitch

    ConnMCLAG

    -

    ConnMCLAG defines the MCLAG connection (port channel, single server to multiple switches with multiple links)

    +

    ConnMCLAG defines the MCLAG connection (port channel, single server to a pair of switches with multiple links)

    Appears in: - ConnectionSpec

    @@ -3954,7 +4004,11 @@

    ConnectionSpec

    - + + + + + @@ -4188,6 +4242,7 @@

    ServerFacingConnectionConfig

    ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration

    Appears in: - ConnBundled +- ConnESLAG - ConnMCLAG - ConnUnbundled

    mclag ConnMCLAGMCLAG defines the MCLAG connection (port channel, single server to multiple switches with multiple links)MCLAG defines the MCLAG connection (port channel, single server to a pair of switches with multiple links)
    eslag ConnESLAGESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)
    mclagDomain ConnMCLAGDomain
    @@ -4238,6 +4293,7 @@

    ServerToSwitchLink defines the server-to-switch link

    Appears in: - ConnBundled +- ConnESLAG - ConnMCLAG - ConnUnbundled

    @@ -4336,6 +4392,28 @@

    SwitchGroupStatus

    SwitchGroupStatus defines the observed state of SwitchGroup

    Appears in: - SwitchGroup

    +

    SwitchRedundancy

    +

    SwitchRedundancy is the switch redundancy configuration, which includes the name of the redundancy group the switch belongs to and its type; it is used for both MCLAG and ESLAG connections. It defines how redundancy will be configured and handled on the switch as well as which connection types will be available. If not specified, the switch will not be part of any redundancy group. If the name isn't empty, the type must be specified as well, and the name should match the name of one of the SwitchGroup objects.

    +

    Appears in: +- SwitchSpec

    +
    + + + + + + + + + + + + + + + + +
    FieldDescription
    name stringGroup is the name of the redundancy group the switch belongs to
    type RedundancyTypeType is the type of the redundancy group, could be mclag or eslag
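As a sketch, the redundancy configuration might appear on a Switch object like this (switch and group names are hypothetical; field names follow the table above):

apiVersion: wiring.githedgehog.com/v1alpha2
kind: Switch
metadata:
  name: leaf-1 # hypothetical switch name
spec:
  redundancy:
    name: eslag-1 # should match an existing SwitchGroup object name
    type: eslag   # mclag or eslag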

    SwitchRole

    Underlying type: string

    SwitchRole is the role of the switch, could be spine, server-leaf, border-leaf, or mixed-leaf

    @@ -4380,6 +4458,10 @@

    SwitchSpec

Groups is a list of switch groups the switch belongs to +redundancy SwitchRedundancy +Redundancy is the switch redundancy configuration, including the name of the redundancy group the switch belongs to and its type, used both for MCLAG and ESLAG connections + + vlanNamespaces string array VLANNamespaces is a list of VLAN namespaces the switch is part of, their VLAN ranges could not overlap diff --git a/dev/search/search_index.json b/dev/search/search_index.json index 08645e8..0cafe17 100644 --- a/dev/search/search_index.json +++ b/dev/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments, without vendor lock-in.

Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to the public clouds, and provides a multi-tenant API to define user intent on network isolation, connectivity, etc., which is automatically transformed into switch and software appliance configuration.

    You can read more about concepts and architecture in the documentation.

You can find out how to download and try Fabric on the self-hosted, fully virtualized lab or on hardware.

    "},{"location":"architecture/fabric/","title":"Hedgehog Network Fabric","text":"

The Hedgehog Open Network Fabric is an open source network architecture that provides connectivity between virtual and physical workloads and a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides standard Kubernetes interfaces to manage the elements in the physical network and a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric isolates different groups of workloads by placing them in different virtual networks called VPCs. To achieve this, we define different abstractions starting from the physical network, where a Connection defines how a physical server on the network connects to a physical switch on the fabric.

    "},{"location":"architecture/fabric/#underlay-network","title":"Underlay Network","text":"

The Hedgehog Fabric currently supports two underlay network topologies.

    "},{"location":"architecture/fabric/#collapsed-core","title":"Collapsed Core","text":"

A collapsed core topology is just a pair of switches connected in an MCLAG configuration with no other network elements. All workloads attach to these two switches.

The leaves in this setup are configured as an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as gateways for workloads attached to them. The configuration of the underlay in the collapsed core is very simple and is ideal for very small deployments.

    "},{"location":"architecture/fabric/#spine-leaf","title":"Spine - Leaf","text":"

A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and spines providing connectivity between different leaves.

This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. We use RFC 7938 as the reference for establishing the underlay network.

    "},{"location":"architecture/fabric/#overlay-network","title":"Overlay Network","text":"

The overlay network runs on top of the underlay network to create virtual networks. The overlay network isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic over VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP EVPN/VXLAN to enable the creation and management of virtual networks on top of the physical underlay. The fabric supports multiple virtual networks over the same underlay network to support multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. In the following sections, we dive a bit deeper into a high-level overview of how VPCs are implemented in the Hedgehog Fabric and their associated objects.

    "},{"location":"architecture/fabric/#vpc","title":"VPC","text":"

We know what a VPC is and how to attach workloads to a specific VPC. Let us now take a look at how this is actually implemented on the network to provide the view of a private network.

    "},{"location":"architecture/fabric/#vpc-peering","title":"VPC Peering","text":"

To enable communication between two different VPCs, we need to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.

    "},{"location":"architecture/overview/","title":"Overview","text":"

    Under construction.

    "},{"location":"concepts/overview/","title":"Concepts","text":""},{"location":"concepts/overview/#introduction","title":"Introduction","text":"

Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.

    Hedgehog Fabric consists of the following components:

    "},{"location":"concepts/overview/#fabric-api","title":"Fabric API","text":"

All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) named the Wiring Diagram. It allows you to define switches, servers, control nodes, external systems, and the connections between them in a single place and then use it to deploy and manage the whole infrastructure. On top of that, Fabric provides a set of APIs to manage VPCs, the connections between them, and the connections between VPCs and External systems.

    "},{"location":"concepts/overview/#wiring-diagram-api","title":"Wiring Diagram API","text":"

    Wiring Diagram consists of the following resources:

    "},{"location":"concepts/overview/#user-facing-api","title":"User-facing API","text":""},{"location":"concepts/overview/#fabricator","title":"Fabricator","text":"

    Installer builder and VLAB.

    "},{"location":"concepts/overview/#das-boot","title":"Das Boot","text":"

    Switch boot and installation.

    "},{"location":"concepts/overview/#fabric","title":"Fabric","text":"

    Control plane and switch agent.

    "},{"location":"contribute/docs/","title":"Documentation","text":""},{"location":"contribute/docs/#getting-started","title":"Getting started","text":"

This documentation is built using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.

In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine as well as any editor of your choice, preferably supporting Markdown preview. You can run the preview server using the following command:

    make serve\n

Now you can open a continuously updated preview of your edits in your browser at http://127.0.0.1:8000. Pages will be automatically updated while you're editing.

    Additionally you can run

    make build\n

to make sure that your changes build correctly and don't break the documentation.

    "},{"location":"contribute/docs/#workflow","title":"Workflow","text":"

If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It'll open the page in the GitHub editor. You can edit it and create a pull request with your changes.

    Please, never push to the master or release/* branches directly. Always create a pull request and wait for the review.

Each pull request will be automatically built, and a preview will be deployed. You can find the link to the preview in the pull request comments.

    "},{"location":"contribute/docs/#repository","title":"Repository","text":"

    Documentation is organized in per-release branches:

The latest release branch is referenced as the latest version in the documentation and will be used by default when you open the documentation.

    "},{"location":"contribute/docs/#file-layout","title":"File layout","text":"

All documentation files are located in the docs directory. Each file is a Markdown file with the .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.

For example, the top-level .pages in this repository looks like this:

    nav:\n  - index.md\n  - getting-started\n  - concepts\n  - Wiring Diagram: wiring\n  - Install & Upgrade: install-upgrade\n  - User Guide: user-guide\n  - Reference: reference\n  - Troubleshooting: troubleshooting\n  - ...\n  - release-notes\n  - contribute\n

Here you can add pages by file name, like index.md, and the page title will be taken from the file (the first line starting with #). Additionally, you can reference a whole directory to create a nested section in the navigation. You can also add custom titles by using the : separator, like Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.

    More details in the MkDocs Pages plugin.

    "},{"location":"contribute/docs/#abbreaviations","title":"Abbreaviations","text":"

You can find abbreviations in the includes/abbreviations.md file. You can add various abbreviations there, and all usages of the defined words in the documentation will get a highlight.

For example, we have the following in includes/abbreviations.md:

    *[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric\n

    It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.

    "},{"location":"contribute/docs/#markdown-extensions","title":"Markdown extensions","text":"

We're using the MkDocs Material theme with multiple extensions enabled. You can find the detailed reference here; below are some of the most useful ones.

To view the code for these examples, please check the source code of this page.

    "},{"location":"contribute/docs/#text-formatting","title":"Text formatting","text":"

Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible and comments can be added inline.

    Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.

    Keyboard keys can be written like so:

    Ctrl+Alt+Del

And inline icons/emojis can be added like this:

    :fontawesome-regular-face-laugh-wink:\n:fontawesome-brands-twitter:{ .twitter }\n

    "},{"location":"contribute/docs/#admonitions","title":"Admonitions","text":"

    Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.

    Lorem ipsum

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

    "},{"location":"contribute/docs/#code-blocks","title":"Code blocks","text":"

    Details can be found here.

A simple code block with line numbers and highlighted lines:

    bubble_sort.py
    def bubble_sort(items):\n    for i in range(len(items)):\n        for j in range(len(items) - 1 - i):\n            if items[j] > items[j + 1]:\n                items[j], items[j + 1] = items[j + 1], items[j]\n

    Code annotations:

    theme:\n  features:\n    - content.code.annotate # (1)\n
    1. I'm a code annotation! I can contain code, formatted text, images, ... basically anything that can be written in Markdown.
    "},{"location":"contribute/docs/#tabs","title":"Tabs","text":"

    You can use Tabs to better organize content.

    CC++
    #include <stdio.h>\n\nint main(void) {\n  printf(\"Hello world!\\n\");\n  return 0;\n}\n
    #include <iostream>\n\nint main(void) {\n  std::cout << \"Hello world!\" << std::endl;\n  return 0;\n}\n
    "},{"location":"contribute/docs/#tables","title":"Tables","text":"Method Description GET Fetch resource PUT Update resource DELETE Delete resource"},{"location":"contribute/docs/#diagrams","title":"Diagrams","text":"

    You can directly include Mermaid diagrams in your Markdown files. Details can be found here.

    graph LR\n  A[Start] --> B{Error?};\n  B -->|Yes| C[Hmm...];\n  C --> D[Debug];\n  D --> B;\n  B ---->|No| E[Yay!];
    sequenceDiagram\n  autonumber\n  Alice->>John: Hello John, how are you?\n  loop Healthcheck\n      John->>John: Fight against hypochondria\n  end\n  Note right of John: Rational thoughts!\n  John-->>Alice: Great!\n  John->>Bob: How about you?\n  Bob-->>John: Jolly good!
    "},{"location":"contribute/overview/","title":"Overview","text":"

    Under construction.

    "},{"location":"getting-started/download/","title":"Download","text":""},{"location":"getting-started/download/#getting-access","title":"Getting access","text":"

Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.

After that, you will be provided with credentials to access the software on GitHub Packages. In order to use it, you need to log in to the registry using the following command:

    docker login ghcr.io\n
    "},{"location":"getting-started/download/#downloading-the-software","title":"Downloading the software","text":"

The main entry point for the software is the Hedgehog Fabricator CLI named hhfab. All software is published into the GitHub Packages OCI registry, including binaries, container images, Helm charts, etc. The latest stable hhfab binary can be downloaded from GitHub Packages using the following command:

    curl -fsSL https://i.hhdev.io/hhfab | bash\n

    Or you can download a specific version using the following command:

    curl -fsSL https://i.hhdev.io/hhfab | VERSION=alpha-X bash\n

The VERSION environment variable can be used to specify the version of the software to download. If it's not specified, the latest release will be downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

It requires ORAS to be installed, which is used to download the binary from the OCI registry; ORAS can be installed using the following command:

    curl -fsSL https://i.hhdev.io/oras | bash\n

    Currently only Linux x86 is supported for running hhfab.

    "},{"location":"getting-started/download/#next-steps","title":"Next steps","text":""},{"location":"install-upgrade/build-wiring/","title":"Build Wiring Diagram","text":"

    Under construction.

In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, please run the sample generator that produces VLAB-compatible wiring diagrams:

    ubuntu@sl-dev:~$ hhfab wiring sample -h\nNAME:\n   hhfab wiring sample - sample wiring diagram (would work for vlab)\n\nUSAGE:\n   hhfab wiring sample [command options] [arguments...]\n\nOPTIONS:\n   --brief, -b                    brief output (only warn and error) (default: false)\n   --fabric-mode value, -m value  fabric mode (one of: collapsed-core, spine-leaf) (default: \"spine-leaf\")\n   --help, -h                     show help\n   --verbose, -v                  verbose output (includes debug) (default: false)\n\n   wiring generator options:\n\n   --chain-control-link         chain control links instead of all switches directly connected to control node if fabric mode is spine-leaf (default: false)\n   --control-links-count value  number of control links if chain-control-link is enabled (default: 0)\n   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)\n   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)\n   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)\n   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)\n   --orphan-leafs-count value   number of orphan leafs (default: 0)\n   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)\n   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)\n
    "},{"location":"install-upgrade/config/","title":"Fabric Configuration","text":"

You can find more information about using hhfab init in the help message by running it with the --help flag.

    "},{"location":"install-upgrade/onie-update/","title":"ONIE Update/Upgrade","text":""},{"location":"install-upgrade/onie-update/#hedgehog-onie-honie-supported-systems","title":"Hedgehog ONIE (HONIE) Supported Systems","text":""},{"location":"install-upgrade/onie-update/#updating-onie","title":"Updating ONIE","text":""},{"location":"install-upgrade/overview/","title":"Install Fabric","text":"

    Under construction.

    "},{"location":"install-upgrade/overview/#prerequisites","title":"Prerequisites","text":""},{"location":"install-upgrade/overview/#main-steps","title":"Main steps","text":"

This chapter is dedicated to installing the Hedgehog Fabric on bare-metal control node(s) and switches, including their preparation and configuration.

Please install hhfab following the instructions in the Download section.

The main steps to install Fabric are:

1. Install hhfab on a machine with access to the internet
      1. Prepare Wiring Diagram
      2. Select Fabric Configuration
      3. Build Control Node configuration and installer
    2. Install Control Node
      1. Install Flatcar Linux on the Control Node
      2. Upload and run Control Node installer on the Control Node
    3. Prepare supported switches
      1. Install Hedgehog ONiE (HONiE) on them
      2. Reboot them into ONiE Install Mode and they will be automatically provisioned
    "},{"location":"install-upgrade/overview/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and installer","text":"

It's the only step that requires internet access to download artifacts and build the installer.

Once you've prepared the Wiring Diagram, you can initialize Fabricator by running the hhfab init command and passing optional configuration into it as well as wiring diagram file(s) as flags. Additionally, there are a lot of customizations available as flags, e.g. to set up default credentials, keys, etc.; please refer to hhfab init --help for more.

The --dev option enables development mode, which will set default credentials and keys for the Control Node and switches:

Alternatively, you can pass your own credentials and keys using the --authorized-key and --control-password-hash flags. The password hash can be generated using the openssl passwd -5 command. Further customizations are available in the config file that can be passed using the --config flag.

    hhfab init --preset lab --dev --wiring file1.yaml --wiring file2.yaml\nhhfab build\n

As a result, you will get the following files in the .hhfab directory or the one you've passed using the --basedir flag:

    "},{"location":"install-upgrade/overview/#install-control-node","title":"Install Control Node","text":"

It's a fully air-gapped installation and doesn't require internet access.

Please download the latest stable Flatcar Container Linux ISO from the link and boot into it (by attaching media over IPMI, a USB stick, or any other way).

Once you've booted into the Flatcar installer, you need to download the ignition.json built in the previous step to it and run the Flatcar installation:

    sudo flatcar-install -d /dev/sda -i ignition.json\n

Where /dev/sda is the disk you want to install the Control Node to and ignition.json is the control-os/ignition.json file from the previous step, downloaded to the Flatcar installer.

    Once the installation is finished, reboot the machine and wait for it to boot into the installed Flatcar Linux.

At that point, you can log into the installed Flatcar Linux using the dev or provided credentials with the user core, and you can now install Hedgehog Open Network Fabric on it. Download control-install.tgz to the freshly installed Control Node (e.g. by using scp) and run it:

    tar xzf control-install.tgz && cd control-install && sudo ./hhfab-recipe run\n

It'll output a log of the Fabric installation (including the Kubernetes cluster, OCI registry, and misc components); you should see the following output at the end:

    ...\n01:34:45 INF Running name=reloader-image op=\"push fabricator/reloader:v1.0.40\"\n01:34:47 INF Running name=reloader-chart op=\"push fabricator/charts/reloader:1.0.40\"\n01:34:47 INF Running name=reloader-install op=\"file /var/lib/rancher/k3s/server/manifests/hh-reloader-install.yaml\"\n01:34:47 INF Running name=reloader-wait op=\"wait deployment/reloader-reloader\"\ndeployment.apps/reloader-reloader condition met\n01:35:15 INF Done took=3m39.586394608s\n

At that point, you can start interacting with the Fabric using kubectl, kubectl fabric, and k9s, preinstalled as part of the Control Node installer.

You can now get HONiE installed on your switches and reboot them into ONiE Install Mode, and they will be automatically provisioned from the Control Node.

    "},{"location":"install-upgrade/requirements/","title":"System Requirements","text":""},{"location":"install-upgrade/requirements/#non-ha-minimal-setup-1-control-node","title":"Non-HA (minimal) setup - 1 Control Node","text":" Minimal Recommended CPU 4 8 RAM 12 GB 16 GB Disk 100 GB 250 GB"},{"location":"install-upgrade/requirements/#future-ha-setup-3-control-nodes-per-node","title":"(Future) HA setup - 3+ Control Nodes (per node)","text":" Minimal Recommended CPU 4 8 RAM 12 GB 16 GB Disk 100 GB 250 GB"},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":" Minimal Recommended CPU 1 2 RAM 1 GB 1.5 GB Disk 5 GB 10 GB"},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":""},{"location":"install-upgrade/supported-devices/#spine","title":"Spine","text":""},{"location":"install-upgrade/supported-devices/#leaf","title":"Leaf","text":""},{"location":"reference/api/","title":"API Reference","text":""},{"location":"reference/api/#packages","title":"Packages","text":""},{"location":"reference/api/#agentgithedgehogcomv1alpha2","title":"agent.githedgehog.com/v1alpha2","text":"

    Package v1alpha2 contains API Schema definitions for the agent v1alpha2 API group. This is the internal API group for the switch and control node agents. Not intended to be modified by the user.

    "},{"location":"reference/api/#resource-types","title":"Resource Types","text":""},{"location":"reference/api/#agent","title":"Agent","text":"

Agent is an internal API object used by the controller to pass all relevant information to the agent running on a specific switch in order to fully configure it and manage its lifecycle. It is not intended to be used directly by users. The spec of the object isn't user-editable; it is managed by the controller. The status of the object is updated by the agent and is used by the controller to track the state of the agent and the switch it is running on. The name of the Agent object is the same as the name of the switch it is running on, and it's created in the same namespace as the Switch object.

    Field Description apiVersion string agent.githedgehog.com/v1alpha2 kind string Agent metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec AgentSpec Spec is the desired state of the Agent status AgentStatus Status is the observed state of the Agent"},{"location":"reference/api/#agentstatus","title":"AgentStatus","text":"

    AgentStatus defines the observed state of the agent running on a specific switch and includes information about the switch itself as well as the state of the agent and applied configuration.

    Appears in: - Agent

    Field Description version string Current running agent version installID string ID of the agent installation, used to track NOS re-installs runID string ID of the agent run, used to track NOS reboots lastHeartbeat Time Time of the last heartbeat from the agent lastAttemptTime Time Time of the last attempt to apply configuration lastAttemptGen integer Generation of the last attempt to apply configuration lastAppliedTime Time Time of the last successful configuration application lastAppliedGen integer Generation of the last successful configuration application nosInfo NOSInfo Information about the switch and NOS statusUpdates ApplyStatusUpdate array Status updates from the agent conditions Condition array Conditions of the agent, includes readiness marker for use with kubectl wait"},{"location":"reference/api/#nosinfo","title":"NOSInfo","text":"

    NOSInfo contains information about the switch and NOS received from the switch itself by the agent

    Appears in: - AgentStatus

    Field Description asicVersion string ASIC name, such as \"broadcom\" or \"vs\" buildCommit string NOS build commit buildDate string NOS build date builtBy string NOS build user configDbVersion string NOS config DB version, such as \"version_4_2_1\" distributionVersion string Distribution version, such as \"Debian 10.13\" hardwareVersion string Hardware version, such as \"X01\" hwskuVersion string Hwsku version, such as \"DellEMC-S5248f-P-25G-DPB\" kernelVersion string Kernel version, such as \"5.10.0-21-amd64\" mfgName string Manufacturer name, such as \"Dell EMC\" platformName string Platform name, such as \"x86_64-dellemc_s5248f_c3538-r0\" productDescription string NOS product description, such as \"Enterprise SONiC Distribution by Broadcom - Enterprise Base package\" productVersion string NOS product version, empty for Broadcom SONiC serialNumber string Switch serial number softwareVersion string NOS software version, such as \"4.2.0-Enterprise_Base\" upTime string Switch uptime, such as \"21:21:27 up 1 day, 23:26, 0 users, load average: 1.92, 1.99, 2.00 \""},{"location":"reference/api/#dhcpgithedgehogcomv1alpha2","title":"dhcp.githedgehog.com/v1alpha2","text":"

Package v1alpha2 contains API Schema definitions for the dhcp v1alpha2 API group. It is a primarily internal API group for the intended Hedgehog DHCP server configuration, storing leases, and making them available to the end user through the API. Not intended to be modified by the user.

    "},{"location":"reference/api/#resource-types_1","title":"Resource Types","text":""},{"location":"reference/api/#dhcpallocated","title":"DHCPAllocated","text":"

DHCPAllocated is a single allocated IP with expiry time and hostname from DHCP requests; it's effectively a DHCP lease

    Appears in: - DHCPSubnetStatus

    Field Description ip string Allocated IP address expiry Time Expiry time of the lease hostname string Hostname from DHCP request"},{"location":"reference/api/#dhcpsubnet","title":"DHCPSubnet","text":"

DHCPSubnet is the configuration (spec) for the Hedgehog DHCP server and storage for the leases (status). It's a primarily internal API group, but it makes allocated IP / lease information available to the end user through the API. Not intended to be modified by the user.

    Field Description apiVersion string dhcp.githedgehog.com/v1alpha2 kind string DHCPSubnet metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec DHCPSubnetSpec Spec is the desired state of the DHCPSubnet status DHCPSubnetStatus Status is the observed state of the DHCPSubnet"},{"location":"reference/api/#dhcpsubnetspec","title":"DHCPSubnetSpec","text":"

    DHCPSubnetSpec defines the desired state of DHCPSubnet

    Appears in: - DHCPSubnet
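To make the spec shape concrete, here is a sketch of a DHCPSubnet assembled from the example values in the field table below (the metadata name is hypothetical; this object is managed by the Fabric rather than created manually):

apiVersion: dhcp.githedgehog.com/v1alpha2\nkind: DHCPSubnet\nmetadata:\n  name: vpc-0--default # hypothetical name\nspec:\n  subnet: vpc-0/default\n  cidrBlock: 10.10.10.0/24\n  gateway: 10.10.10.1\n  startIP: 10.10.10.10\n  endIP: 10.10.10.99\n  vrf: VrfVvpc-1 # as named on the switch\n  circuitID: Vlan1000\n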

Field Description subnet string Full VPC subnet name (including VPC name), such as \"vpc-0/default\" cidrBlock string CIDR block to use for VPC subnet, such as \"10.10.10.0/24\" gateway string Gateway, such as 10.10.10.1 startIP string Start IP from the CIDRBlock to allocate IPs, such as 10.10.10.10 endIP string End IP from the CIDRBlock to allocate IPs, such as 10.10.10.99 vrf string VRF name to identify specific VPC (will be added to DHCP packets by DHCP relay in suboption 151), such as \"VrfVvpc-1\" as it's named on switch circuitID string VLAN ID to identify specific subnet within the VPC, such as \"Vlan1000\" as it's named on switch"},{"location":"reference/api/#dhcpsubnetstatus","title":"DHCPSubnetStatus","text":"

    DHCPSubnetStatus defines the observed state of DHCPSubnet

    Appears in: - DHCPSubnet

    Field Description allocated object (keys:string, values:DHCPAllocated) Allocated is a map of allocated IPs with expiry time and hostname from DHCP requests"},{"location":"reference/api/#vpcgithedgehogcomv1alpha2","title":"vpc.githedgehog.com/v1alpha2","text":"

Package v1alpha2 contains API Schema definitions for the vpc v1alpha2 API group. It is a public API group for the VPCs and Externals APIs. Intended to be used by the user.

    "},{"location":"reference/api/#resource-types_2","title":"Resource Types","text":""},{"location":"reference/api/#external","title":"External","text":"

The External object represents an external system connected to the Fabric and available to a specific IPv4Namespace. Users can do external peering with the external system by specifying the name of the External object, without needing to worry about the details of how the external system is attached to the Fabric.

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string External metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalSpec Spec is the desired state of the External status ExternalStatus Status is the observed state of the External"},{"location":"reference/api/#externalattachment","title":"ExternalAttachment","text":"

ExternalAttachment is a definition of how a specific switch is connected to an external system (External object). Effectively, it represents BGP peering between the switch and the external system, including all needed configuration.

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string ExternalAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalAttachmentSpec Spec is the desired state of the ExternalAttachment status ExternalAttachmentStatus Status is the observed state of the ExternalAttachment"},{"location":"reference/api/#externalattachmentneighbor","title":"ExternalAttachmentNeighbor","text":"

    ExternalAttachmentNeighbor defines the BGP neighbor configuration for the external attachment

    Appears in: - ExternalAttachmentSpec

    Field Description asn integer ASN is the ASN of the BGP neighbor ip string IP is the IP address of the BGP neighbor to peer with"},{"location":"reference/api/#externalattachmentspec","title":"ExternalAttachmentSpec","text":"

    ExternalAttachmentSpec defines the desired state of ExternalAttachment

    Appears in: - AgentSpec - ExternalAttachment
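As a sketch, an ExternalAttachment might look like this (all names, the VLAN, the ASN, and the IPs are hypothetical; fields follow the table below and the ExternalAttachmentSwitch/Neighbor sections):

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: external-1--switch-1 # hypothetical name\nspec:\n  external: external-1\n  connection: switch-1--external--external-1 # hypothetical Connection name\n  switch:\n    vlan: 100 # VLAN for the subinterface on the switch port\n    ip: 192.168.100.1/24\n  neighbor:\n    asn: 65100\n    ip: 192.168.100.2\n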

Field Description external string External is the name of the External object this attachment belongs to connection string Connection is the name of the Connection object this attachment belongs to (essentially the name of the switch/port) switch ExternalAttachmentSwitch Switch is the switch port configuration for the external attachment neighbor ExternalAttachmentNeighbor Neighbor is the BGP neighbor configuration for the external attachment"},{"location":"reference/api/#externalattachmentstatus","title":"ExternalAttachmentStatus","text":"

    ExternalAttachmentStatus defines the observed state of ExternalAttachment

    Appears in: - ExternalAttachment

    "},{"location":"reference/api/#externalattachmentswitch","title":"ExternalAttachmentSwitch","text":"

    ExternalAttachmentSwitch defines the switch port configuration for the external attachment

    Appears in: - ExternalAttachmentSpec

    Field Description vlan integer VLAN is the VLAN ID used for the subinterface on a switch port specified in the connection ip string IP is the IP address of the subinterface on a switch port specified in the connection"},{"location":"reference/api/#externalpeering","title":"ExternalPeering","text":"

    ExternalPeering is the Schema for the externalpeerings API

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string ExternalPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalPeeringSpec Spec is the desired state of the ExternalPeering status ExternalPeeringStatus Status is the observed state of the ExternalPeering"},{"location":"reference/api/#externalpeeringspec","title":"ExternalPeeringSpec","text":"

    ExternalPeeringSpec defines the desired state of ExternalPeering

    Appears in: - AgentSpec - ExternalPeering
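As a sketch, an ExternalPeering permitting the default route from an External into a VPC might look like this (names are hypothetical; the nested types are described in the sections below):

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--external-1 # hypothetical name\nspec:\n  permit:\n    vpc:\n      name: vpc-1\n      subnets: # VPC subnets to advertise to the External\n        - default\n    external:\n      name: external-1\n      prefixes: # prefixes to permit from the External\n        - prefix: 0.0.0.0/0 # default route, as in the example below\n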

    Field Description permit ExternalPeeringSpecPermit Permit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit"},{"location":"reference/api/#externalpeeringspecexternal","title":"ExternalPeeringSpecExternal","text":"

    ExternalPeeringSpecExternal defines the External-side of the configuration to peer with

    Appears in: - ExternalPeeringSpecPermit

    Field Description name string Name is the name of the External to peer with prefixes ExternalPeeringSpecPrefix array Prefixes is the list of prefixes to permit from the External to the VPC"},{"location":"reference/api/#externalpeeringspecpermit","title":"ExternalPeeringSpecPermit","text":"

    ExternalPeeringSpecPermit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit

    Appears in: - ExternalPeeringSpec

    Field Description vpc ExternalPeeringSpecVPC VPC is the VPC-side of the configuration to peer with external ExternalPeeringSpecExternal External is the External-side of the configuration to peer with"},{"location":"reference/api/#externalpeeringspecprefix","title":"ExternalPeeringSpecPrefix","text":"

    ExternalPeeringSpecPrefix defines the prefix to permit from the External to the VPC

    Appears in: - ExternalPeeringSpecExternal

    Field Description prefix string Prefix is the subnet to permit from the External to the VPC, e.g. 0.0.0.0/0 for default route ge integer Ge is the minimum prefix length to permit from the External to the VPC, e.g. 24 for /24 le integer Le is the maximum prefix length to permit from the External to the VPC, e.g. 32 for /32"},{"location":"reference/api/#externalpeeringspecvpc","title":"ExternalPeeringSpecVPC","text":"

    ExternalPeeringSpecVPC defines the VPC-side of the configuration to peer with

    Appears in: - ExternalPeeringSpecPermit

    Field Description name string Name is the name of the VPC to peer with subnets string array Subnets is the list of subnets to advertise from VPC to the External"},{"location":"reference/api/#externalpeeringstatus","title":"ExternalPeeringStatus","text":"

    ExternalPeeringStatus defines the observed state of ExternalPeering

    Appears in: - ExternalPeering

    "},{"location":"reference/api/#externalspec","title":"ExternalSpec","text":"

ExternalSpec describes the IPv4 namespace the External belongs to and the inbound/outbound communities which are used to filter routes from/to the external system.

    Appears in: - AgentSpec - External
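As a sketch, an External might look like this (the name and BGP community values are hypothetical):

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: external-1 # hypothetical name\nspec:\n  ipv4Namespace: default\n  inboundCommunity: \"65102:5000\" # hypothetical community values\n  outboundCommunity: \"50000:50001\"\n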

    Field Description ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this External belongs to inboundCommunity string InboundCommunity is the name of the inbound community to filter routes from the external system outboundCommunity string OutboundCommunity is the name of the outbound community that all outbound routes will be stamped with"},{"location":"reference/api/#externalstatus","title":"ExternalStatus","text":"

    ExternalStatus defines the observed state of External

    Appears in: - External

    "},{"location":"reference/api/#ipv4namespace","title":"IPv4Namespace","text":"

IPv4Namespace represents a namespace for VPC subnet allocation. All VPC subnets within a single IPv4Namespace are non-overlapping. Users can create multiple IPv4Namespaces to allocate the same VPC subnets in different namespaces.

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string IPv4Namespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec IPv4NamespaceSpec Spec is the desired state of the IPv4Namespace status IPv4NamespaceStatus Status is the observed state of the IPv4Namespace"},{"location":"reference/api/#ipv4namespacespec","title":"IPv4NamespaceSpec","text":"

    IPv4NamespaceSpec defines the desired state of IPv4Namespace

    Appears in: - AgentSpec - IPv4Namespace
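As a sketch, an IPv4Namespace might look like this (the subnet range is hypothetical):

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: IPv4Namespace\nmetadata:\n  name: default\nspec:\n  subnets:\n    - 10.0.0.0/16 # hypothetical range to allocate VPC subnets from\n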

Field Description subnets string array Subnets is the list of subnets to allocate VPC subnets from; they must not overlap with each other or with Fabric reserved subnets"},{"location":"reference/api/#ipv4namespacestatus","title":"IPv4NamespaceStatus","text":"

    IPv4NamespaceStatus defines the observed state of IPv4Namespace

    Appears in: - IPv4Namespace

    "},{"location":"reference/api/#vpc","title":"VPC","text":"

VPC is a Virtual Private Cloud; similar to a public cloud VPC, it provides an isolated private network for the resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string VPC metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCSpec Spec is the desired state of the VPC status VPCStatus Status is the observed state of the VPC"},{"location":"reference/api/#vpcattachment","title":"VPCAttachment","text":"

    VPCAttachment is the Schema for the vpcattachments API

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string VPCAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCAttachmentSpec Spec is the desired state of the VPCAttachment status VPCAttachmentStatus Status is the observed state of the VPCAttachment"},{"location":"reference/api/#vpcattachmentspec","title":"VPCAttachmentSpec","text":"

    VPCAttachmentSpec defines the desired state of VPCAttachment

    Appears in: - AgentSpec - VPCAttachment
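As a sketch, a VPCAttachment attaching the "vpc-1/default" subnet from the field table below to a server-facing connection might look like this (the metadata and Connection names are hypothetical):

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-default--server-1 # hypothetical name\nspec:\n  subnet: vpc-1/default\n  connection: server-1--mclag--leaf-1--leaf-2 # hypothetical Connection name\n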

    Field Description subnet string Subnet is the full name of the VPC subnet to attach to, such as \"vpc-1/default\" connection string Connection is the name of the connection to attach to the VPC"},{"location":"reference/api/#vpcattachmentstatus","title":"VPCAttachmentStatus","text":"

    VPCAttachmentStatus defines the observed state of VPCAttachment

    Appears in: - VPCAttachment

    "},{"location":"reference/api/#vpcdhcp","title":"VPCDHCP","text":"

    VPCDHCP defines the on-demand DHCP configuration for the subnet

    Appears in: - VPCSubnet

    Field Description relay string Relay is the DHCP relay IP address, if specified, DHCP server will be disabled enable boolean Enable enables DHCP server for the subnet range VPCDHCPRange Range is the DHCP range for the subnet if DHCP server is enabled"},{"location":"reference/api/#vpcdhcprange","title":"VPCDHCPRange","text":"

    Underlying type: struct{Start string \"json:\\\"start,omitempty\\\"\"; End string \"json:\\\"end,omitempty\\\"\"}

    VPCDHCPRange defines the DHCP range for the subnet if DHCP server is enabled

    Appears in: - VPCDHCP

    "},{"location":"reference/api/#vpcpeer","title":"VPCPeer","text":"

    Appears in: - VPCPeeringSpec

    Field Description subnets string array Subnets is the list of subnets to advertise from current VPC to the peer VPC"},{"location":"reference/api/#vpcpeering","title":"VPCPeering","text":"

VPCPeering represents a peering between two VPCs with corresponding filtering rules. A minimal example of VPC peering, showing vpc-1 to vpc-2 peering with all subnets allowed: spec: permit: - vpc-1: {} vpc-2: {}
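Written out as a complete object, that minimal example looks like this (the metadata name is hypothetical):

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2 # hypothetical name\nspec:\n  permit:\n    - vpc-1: {}\n      vpc-2: {}\n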

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string VPCPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCPeeringSpec Spec is the desired state of the VPCPeering status VPCPeeringStatus Status is the observed state of the VPCPeering"},{"location":"reference/api/#vpcpeeringspec","title":"VPCPeeringSpec","text":"

    VPCPeeringSpec defines the desired state of VPCPeering

    Appears in: - AgentSpec - VPCPeering

    Field Description remote string permit map[string]VPCPeer array Permit defines a list of the peering policies - which VPC subnets will have access to the peer VPC subnets."},{"location":"reference/api/#vpcpeeringstatus","title":"VPCPeeringStatus","text":"

    VPCPeeringStatus defines the observed state of VPCPeering

    Appears in: - VPCPeering

    "},{"location":"reference/api/#vpcspec","title":"VPCSpec","text":"

    VPCSpec defines the desired state of VPC. At least one subnet is required.

    Appears in: - AgentSpec - VPC
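As a sketch, a VPC with a single DHCP-enabled subnet might look like this (names, the CIDR, the VLAN, and the DHCP range are hypothetical; the VPCSubnet and VPCDHCP fields are described in the sections below):

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPC\nmetadata:\n  name: vpc-1\nspec:\n  ipv4Namespace: default\n  vlanNamespace: default\n  subnets:\n    default:\n      subnet: 10.0.0.0/24 # must belong to the IPv4Namespace\n      vlan: \"1001\" # must belong to the VLANNamespace\n      dhcp:\n        enable: true\n        range:\n          start: 10.0.0.10\n          end: 10.0.0.99\n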

    Field Description subnets object (keys:string, values:VPCSubnet) Subnets is the list of VPC subnets to configure ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this VPC belongs to vlanNamespace string VLANNamespace is the name of the VLANNamespace this VPC belongs to"},{"location":"reference/api/#vpcstatus","title":"VPCStatus","text":"

    VPCStatus defines the observed state of VPC

    Appears in: - VPC

    Field Description vni integer VNI is the global Fabric-level VNI allocated for the VPC subnetVNIs object (keys:string, values:integer) SubnetVNIs is the map of subnet names to the global Fabric-level VNIs allocated for the VPC subnets"},{"location":"reference/api/#vpcsubnet","title":"VPCSubnet","text":"

    VPCSubnet defines the VPC subnet configuration

    Appears in: - VPCSpec

    Field Description subnet string Subnet is the subnet CIDR block, such as \"10.0.0.0/24\", should belong to the IPv4Namespace and be unique within the namespace dhcp VPCDHCP DHCP is the on-demand DHCP configuration for the subnet vlan string VLAN is the VLAN ID for the subnet, should belong to the VLANNamespace and be unique within the namespace"},{"location":"reference/api/#wiringgithedgehogcomv1alpha2","title":"wiring.githedgehog.com/v1alpha2","text":"

Package v1alpha2 contains API Schema definitions for the wiring v1alpha2 API group. It is a public API group, mainly for the underlay definition, including Switches, Servers, the wiring between them, etc. Intended to be used by the user.

    "},{"location":"reference/api/#resource-types_3","title":"Resource Types","text":""},{"location":"reference/api/#baseportname","title":"BasePortName","text":"

    BasePortName defines the full name of the switch port

    Appears in: - ConnExternalLink - ConnFabricLinkSwitch - ConnMgmtLinkServer - ConnMgmtLinkSwitch - ConnStaticExternalLinkSwitch - ServerToSwitchLink - SwitchToSwitchLink

    Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". SONiC port name is used as a port name and switch name should be same as the name of the Switch object."},{"location":"reference/api/#connbundled","title":"ConnBundled","text":"

    ConnBundled defines the bundled connection (port channel, single server to a single switch with multiple links)

    Appears in: - ConnectionSpec

    Field Description links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connexternal","title":"ConnExternal","text":"

    ConnExternal defines the external connection (single switch to a single external device with a single link)

    Appears in: - ConnectionSpec

    Field Description link ConnExternalLink Link is the external connection link"},{"location":"reference/api/#connexternallink","title":"ConnExternalLink","text":"

    ConnExternalLink defines the external connection link

    Appears in: - ConnExternal

    Field Description switch BasePortName"},{"location":"reference/api/#connfabric","title":"ConnFabric","text":"

    ConnFabric defines the fabric connection (single spine to a single leaf with at least one link)

    Appears in: - ConnectionSpec

    Field Description links FabricLink array Links is the list of spine-to-leaf links"},{"location":"reference/api/#connfabriclinkswitch","title":"ConnFabricLinkSwitch","text":"

    ConnFabricLinkSwitch defines the switch side of the fabric link

    Appears in: - FabricLink

    Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". SONiC port name is used as a port name and switch name should be same as the name of the Switch object. ip string IP is the IP address of the switch side of the fabric link (switch port configuration)"},{"location":"reference/api/#connmclag","title":"ConnMCLAG","text":"

    ConnMCLAG defines the MCLAG connection (port channel, single server to multiple switches with multiple links)

    Appears in: - ConnectionSpec

    Field Description links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connmclagdomain","title":"ConnMCLAGDomain","text":"

ConnMCLAGDomain defines the MCLAG domain connection, which makes two switches into a single logical switch or redundancy group and allows using MCLAG connections to connect servers in a multi-homed way.

    Appears in: - ConnectionSpec
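As a sketch, an MCLAG domain Connection might look like this (switch and port names are hypothetical, and each SwitchToSwitchLink is assumed to have two BasePortName endpoints, here called switch1 and switch2):

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: leaf-1--mclag-domain--leaf-2 # hypothetical name\nspec:\n  mclagDomain:\n    peerLinks: # carry server traffic between the switches\n      - switch1:\n          port: leaf-1/Ethernet10\n        switch2:\n          port: leaf-2/Ethernet10\n    sessionLinks: # carry MCLAG control plane and BGP traffic\n      - switch1:\n          port: leaf-1/Ethernet11\n        switch2:\n          port: leaf-2/Ethernet11\n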

    Field Description peerLinks SwitchToSwitchLink array PeerLinks is the list of peer links between the switches, used to pass server traffic between switch sessionLinks SwitchToSwitchLink array SessionLinks is the list of session links between the switches, used only to pass MCLAG control plane and BGP traffic between switches"},{"location":"reference/api/#connmgmt","title":"ConnMgmt","text":"

    ConnMgmt defines the management connection (single control node/server to a single switch with a single link)

    Appears in: - ConnectionSpec

    Field Description link ConnMgmtLink"},{"location":"reference/api/#connmgmtlink","title":"ConnMgmtLink","text":"

    ConnMgmtLink defines the management connection link

    Appears in: - ConnMgmt

    Field Description server ConnMgmtLinkServer Server is the server side of the management link switch ConnMgmtLinkSwitch Switch is the switch side of the management link"},{"location":"reference/api/#connmgmtlinkserver","title":"ConnMgmtLinkServer","text":"

    ConnMgmtLinkServer defines the server side of the management link

    Appears in: - ConnMgmtLink

    Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". SONiC port name is used as a port name and switch name should be same as the name of the Switch object. ip string IP is the IP address of the server side of the management link (control node port configuration) mac string MAC is an optional MAC address of the control node port for the management link, if specified will be used to create a \"virtual\" link with the connection names on the control node"},{"location":"reference/api/#connmgmtlinkswitch","title":"ConnMgmtLinkSwitch","text":"

    ConnMgmtLinkSwitch defines the switch side of the management link

    Appears in: - ConnMgmtLink

    Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". SONiC port name is used as a port name and switch name should be same as the name of the Switch object. ip string IP is the IP address of the switch side of the management link (switch port configuration) oniePortName string ONIEPortName is an optional ONIE port name of the switch side of the management link that's only used by the IPv6 Link Local discovery"},{"location":"reference/api/#connstaticexternal","title":"ConnStaticExternal","text":"

    ConnStaticExternal defines the static external connection (single switch to a single external device with a single link)

    Appears in: - ConnectionSpec

    Field Description link ConnStaticExternalLink Link is the static external connection link"},{"location":"reference/api/#connstaticexternallink","title":"ConnStaticExternalLink","text":"

    ConnStaticExternalLink defines the static external connection link

    Appears in: - ConnStaticExternal

    Field Description switch ConnStaticExternalLinkSwitch Switch is the switch side of the static external connection link"},{"location":"reference/api/#connstaticexternallinkswitch","title":"ConnStaticExternalLinkSwitch","text":"

    ConnStaticExternalLinkSwitch defines the switch side of the static external connection link

    Appears in: - ConnStaticExternalLink
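As a sketch, a static external Connection might look like this (the switch, port, and all addresses are hypothetical; fields follow the table below):

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: switch-1--static-external # hypothetical name\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: switch-1/Ethernet48\n        ip: 172.16.0.1/24\n        nextHop: 172.16.0.2 # next hop for the static routes below\n        subnets:\n          - 172.16.10.0/24 # gets a static route via the next hop\n        vlan: 100 # optional VLAN for the switch port\n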

    Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". SONiC port name is used as a port name and switch name should be same as the name of the Switch object. ip string IP is the IP address of the switch side of the static external connection link (switch port configuration) nextHop string NextHop is the next hop IP address for static routes that will be created for the subnets subnets string array Subnets is the list of subnets that will get static routes using the specified next hop vlan integer VLAN is the optional VLAN ID to be configured on the switch port"},{"location":"reference/api/#connunbundled","title":"ConnUnbundled","text":"

    ConnUnbundled defines the unbundled connection (no port channel, single server to a single switch with a single link)

    Appears in: - ConnectionSpec

    Field Description link ServerToSwitchLink Link is the server-to-switch link mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connvpcloopback","title":"ConnVPCLoopback","text":"

ConnVPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) that enables an automated workaround named \"VPC Loopback\" that helps avoid switch hardware limitations and traffic going through the CPU in some cases

    Appears in: - ConnectionSpec

    Field Description links SwitchToSwitchLink array Links is the list of VPC loopback links"},{"location":"reference/api/#connection","title":"Connection","text":"

The Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server, and External objects). It's needed to define all physical and logical connections between the devices in the Wiring Diagram. The connection type is defined by the top-level field in the ConnectionSpec; exactly one of them can be used in a single Connection object.

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string Connection metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ConnectionSpec Spec is the desired state of the Connection status ConnectionStatus Status is the observed state of the Connection"},{"location":"reference/api/#connectionspec","title":"ConnectionSpec","text":"

    ConnectionSpec defines the desired state of Connection

    Appears in: - AgentSpec - Connection
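As a sketch, the simplest Connection, an unbundled server-to-switch link, might look like this (object, server, and port names are hypothetical, and the ServerToSwitchLink is assumed to have server and switch BasePortName endpoints):

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-1--unbundled--leaf-1 # hypothetical name\nspec:\n  unbundled:\n    link:\n      server:\n        port: server-1/enp2s1 # hypothetical server NIC\n      switch:\n        port: leaf-1/Ethernet1 # \"device/port\" format as documented above\n

Exactly one top-level field (here unbundled) may be set per Connection object.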

    Field Description unbundled ConnUnbundled Unbundled defines the unbundled connection (no port channel, single server to a single switch with a single link) bundled ConnBundled Bundled defines the bundled connection (port channel, single server to a single switch with multiple links) management ConnMgmt Management defines the management connection (single control node/server to a single switch with a single link) mclag ConnMCLAG MCLAG defines the MCLAG connection (port channel, single server to multiple switches with multiple links) mclagDomain ConnMCLAGDomain MCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch for server multi-homing fabric ConnFabric Fabric defines the fabric connection (single spine to a single leaf with at least one link) vpcLoopback ConnVPCLoopback VPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) for automated workaround external ConnExternal External defines the external connection (single switch to a single external device with a single link) staticExternal ConnStaticExternal StaticExternal defines the static external connection (single switch to a single external device with a single link)"},{"location":"reference/api/#connectionstatus","title":"ConnectionStatus","text":"

    ConnectionStatus defines the observed state of Connection

    Appears in: - Connection

    "},{"location":"reference/api/#fabriclink","title":"FabricLink","text":"

    FabricLink defines the fabric connection link

    Appears in: - ConnFabric

    Field Description spine ConnFabricLinkSwitch Spine is the spine side of the fabric link leaf ConnFabricLinkSwitch Leaf is the leaf side of the fabric link"},{"location":"reference/api/#location","title":"Location","text":"

    Location defines the geographical position of the device in a datacenter

    Appears in: - SwitchSpec
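
    For illustration, a hypothetical spec.location block using the fields listed below (all values are made up):

    location:\n  location: dc-1\n  aisle: \"1\"\n  row: \"2\"\n  rack: rack-42\n  slot: \"10\"\n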

    Field Description location string aisle string row string rack string slot string"},{"location":"reference/api/#locationsig","title":"LocationSig","text":"

    LocationSig contains signatures for the location UUID as well as the device location itself

    Appears in: - SwitchSpec

    Field Description sig string uuidSig string"},{"location":"reference/api/#rack","title":"Rack","text":"

    Rack is the Schema for the racks API

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string Rack metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec RackSpec status RackStatus"},{"location":"reference/api/#rackposition","title":"RackPosition","text":"

    RackPosition defines the geographical position of the rack in a datacenter

    Appears in: - RackSpec

    Field Description location string aisle string row string"},{"location":"reference/api/#rackspec","title":"RackSpec","text":"

    RackSpec defines the properties of a rack which we are modelling

    Appears in: - Rack
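
    For illustration, a hypothetical Rack object using the fields listed below (all names and values are made up):

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Rack\nmetadata:\n  name: rack-1\n  namespace: default\nspec:\n  numServers: 8\n  hasControlNode: true\n  hasConsoleServer: false\n  position:\n    location: dc-1\n    aisle: \"1\"\n    row: \"2\"\n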

    Field Description numServers integer hasControlNode boolean hasConsoleServer boolean position RackPosition"},{"location":"reference/api/#rackstatus","title":"RackStatus","text":"

    RackStatus defines the observed state of Rack

    Appears in: - Rack

    "},{"location":"reference/api/#server","title":"Server","text":"

    Server is the Schema for the servers API

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string Server metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ServerSpec Spec is desired state of the server status ServerStatus Status is the observed state of the server"},{"location":"reference/api/#serverfacingconnectionconfig","title":"ServerFacingConnectionConfig","text":"

    ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration

    Appears in: - ConnBundled - ConnMCLAG - ConnUnbundled

    Field Description mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#serverspec","title":"ServerSpec","text":"

    ServerSpec defines the desired state of Server

    Appears in: - Server

    Field Description type ServerType Type is the type of server, could be control for control nodes or default (empty string) for everything else description string Description is a description of the server profile string Profile is the profile of the server, name of the ServerProfile object to be used for this server, currently not used by the Fabric"},{"location":"reference/api/#serverstatus","title":"ServerStatus","text":"

    ServerStatus defines the observed state of Server

    Appears in: - Server

    "},{"location":"reference/api/#servertoswitchlink","title":"ServerToSwitchLink","text":"

    ServerToSwitchLink defines the server-to-switch link

    Appears in: - ConnBundled - ConnMCLAG - ConnUnbundled

    Field Description server BasePortName Server is the server side of the connection switch BasePortName Switch is the switch side of the connection"},{"location":"reference/api/#servertype","title":"ServerType","text":"

    Underlying type: string

    ServerType is the type of server, could be control for control nodes or default (empty string) for everything else

    Appears in: - ServerSpec

    "},{"location":"reference/api/#switch","title":"Switch","text":"

    Switch is the Schema for the switches API. All switches should always have 1 label defined: wiring.githedgehog.com/rack. It represents the name of the rack the switch belongs to.
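
    For illustration, a minimal sketch of the required label (assuming a hypothetical rack named rack-1):

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\n  labels:\n    wiring.githedgehog.com/rack: rack-1 # name of the Rack object this switch belongs to\n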

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string Switch metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchSpec Spec is desired state of the switch status SwitchStatus Status is the observed state of the switch"},{"location":"reference/api/#switchgroup","title":"SwitchGroup","text":"

    SwitchGroup is the marker API object to group switches together; a switch can belong to multiple groups

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string SwitchGroup metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchGroupSpec Spec is the desired state of the SwitchGroup status SwitchGroupStatus Status is the observed state of the SwitchGroup"},{"location":"reference/api/#switchgroupspec","title":"SwitchGroupSpec","text":"

    SwitchGroupSpec defines the desired state of SwitchGroup

    Appears in: - SwitchGroup

    "},{"location":"reference/api/#switchgroupstatus","title":"SwitchGroupStatus","text":"

    SwitchGroupStatus defines the observed state of SwitchGroup

    Appears in: - SwitchGroup

    "},{"location":"reference/api/#switchrole","title":"SwitchRole","text":"

    Underlying type: string

    SwitchRole is the role of the switch, could be spine, server-leaf, border-leaf or mixed-leaf

    Appears in: - AgentSpec - SwitchSpec

    "},{"location":"reference/api/#switchspec","title":"SwitchSpec","text":"

    SwitchSpec defines the desired state of Switch

    Appears in: - AgentSpec - Switch

    Field Description role SwitchRole Role is the role of the switch, could be spine, server-leaf, border-leaf or mixed-leaf description string Description is a description of the switch profile string Profile is the profile of the switch, name of the SwitchProfile object to be used for this switch, currently not used by the Fabric location Location Location is the location of the switch, it is used to generate the location UUID and location signature locationSig LocationSig LocationSig is the location signature for the switch groups string array Groups is a list of switch groups the switch belongs to vlanNamespaces string array VLANNamespaces is a list of VLAN namespaces the switch is part of, their VLAN ranges cannot overlap asn integer ASN is the ASN of the switch ip string IP is the IP of the switch that could be used to access it from other switches and control nodes in the Fabric vtepIP string VTEPIP is the VTEP IP of the switch protocolIP string ProtocolIP is used as BGP Router ID for switch configuration portGroupSpeeds object (keys:string, values:string) PortGroupSpeeds is a map of port group speeds, key is the port group name, value is the speed, such as '\"2\": 10G' portSpeeds object (keys:string, values:string) PortSpeeds is a map of port speeds, key is the port name, value is the speed portBreakouts object (keys:string, values:string) PortBreakouts is a map of port breakouts, key is the port name, value is the breakout configuration, such as \"1/55: 4x25G\""},{"location":"reference/api/#switchstatus","title":"SwitchStatus","text":"

    SwitchStatus defines the observed state of Switch

    Appears in: - Switch

    "},{"location":"reference/api/#switchtoswitchlink","title":"SwitchToSwitchLink","text":"

    SwitchToSwitchLink defines the switch-to-switch link

    Appears in: - ConnMCLAGDomain - ConnVPCLoopback

    Field Description switch1 BasePortName Switch1 is the first switch side of the connection switch2 BasePortName Switch2 is the second switch side of the connection"},{"location":"reference/api/#vlannamespace","title":"VLANNamespace","text":"

    VLANNamespace is the Schema for the vlannamespaces API

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string VLANNamespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VLANNamespaceSpec Spec is the desired state of the VLANNamespace status VLANNamespaceStatus Status is the observed state of the VLANNamespace"},{"location":"reference/api/#vlannamespacespec","title":"VLANNamespaceSpec","text":"

    VLANNamespaceSpec defines the desired state of VLANNamespace

    Appears in: - AgentSpec - VLANNamespace

    Field Description ranges VLANRange array Ranges is a list of VLAN ranges to be used in this namespace, which cannot overlap with each other or with Fabric reserved VLAN ranges"},{"location":"reference/api/#vlannamespacestatus","title":"VLANNamespaceStatus","text":"

    VLANNamespaceStatus defines the observed state of VLANNamespace

    Appears in: - VLANNamespace

    "},{"location":"reference/cli/","title":"Fabric CLI","text":"

    Under construction.

    Currently, the Fabric CLI is represented by a kubectl plugin kubectl-fabric automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client which allows managing Fabric resources in a more convenient way. The Fabric CLI only provides a subset of the functionality available via the Fabric API and is focused on simplifying object creation and manipulation of already existing objects, while the main get/list/update operations are expected to be done using kubectl.

    core@control-1 ~ $ kubectl fabric\nNAME:\n   hhfctl - Hedgehog Fabric user client\n\nUSAGE:\n   hhfctl [global options] command [command options] [arguments...]\n\nVERSION:\n   v0.23.0\n\nCOMMANDS:\n   vpc                VPC commands\n   switch, sw, agent  Switch/Agent commands\n   connection, conn   Connection commands\n   switchgroup, sg    SwitchGroup commands\n   external           External commands\n   help, h            Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n   --verbose, -v  verbose output (includes debug) (default: true)\n   --help, -h     show help\n   --version, -V  print the version\n
    "},{"location":"reference/cli/#vpc","title":"VPC","text":"

    Create a VPC named vpc-1 with subnet 10.0.1.0/24 and VLAN 1001, with DHCP enabled and an optional DHCP range starting from 10.0.1.10:

    core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n

    Attach the previously created VPC to the server server-01 (which is connected to the Fabric using the server-01--mclag--leaf-01--leaf-02 Connection):

    core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n

    To peer the VPC with another VPC (e.g. vpc-2), use the following command:

    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n
    "},{"location":"release-notes/","title":"Release notes","text":""},{"location":"release-notes/#alpha-3","title":"Alpha-3","text":""},{"location":"release-notes/#sonic-support","title":"SONiC support","text":"

    Broadcom Enterprise SONiC 4.2.0 (previously 4.1.1)

    "},{"location":"release-notes/#multiple-ipv4-namespaces","title":"Multiple IPv4 namespaces","text":""},{"location":"release-notes/#hedgehog-fabric-dhcp-and-ipam-service","title":"Hedgehog Fabric DHCP and IPAM Service","text":""},{"location":"release-notes/#hedgehog-fabric-ntp-service","title":"Hedgehog Fabric NTP Service","text":""},{"location":"release-notes/#staticexternal-connections","title":"StaticExternal connections","text":""},{"location":"release-notes/#dhcp-relay-to-3rd-party-dhcp-service","title":"DHCP Relay to 3rd party DHCP service","text":"

    Support for 3rd party DHCP server (DHCP Relay config) through the API

    "},{"location":"release-notes/#alpha-2","title":"Alpha-2","text":""},{"location":"release-notes/#controller","title":"Controller","text":"

    A single controller. No controller redundancy.

    "},{"location":"release-notes/#controller-connectivity","title":"Controller connectivity","text":"

    For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.

    Spine controller connectivity is not supported.

    For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.

    Management port connectivity can also be supported for the CLOS/LEAF-SPINE topology, but requires all switches to be connected to the controllers via management ports. No chain booting is possible for this configuration.

    "},{"location":"release-notes/#controller-requirements","title":"Controller requirements","text":""},{"location":"release-notes/#chain-booting","title":"Chain booting","text":"

    Switches not directly connected to the controllers can chain boot via the data network.

    "},{"location":"release-notes/#topology-support","title":"Topology support","text":"

    CLOS/LEAF-SPINE and Collapsed Core topologies are supported.

    "},{"location":"release-notes/#leaf-roles-for-clos-topology","title":"LEAF Roles for CLOS topology","text":"

    Server leaf, border leaf, and mixed leaf modes are supported.

    "},{"location":"release-notes/#collapsed-core-topology","title":"Collapsed Core Topology","text":"

    Two ToR/LEAF switches with MCLAG server connection.

    "},{"location":"release-notes/#server-multihoming","title":"Server multihoming","text":"

    MCLAG-only.

    "},{"location":"release-notes/#device-support","title":"Device support","text":""},{"location":"release-notes/#leafs","title":"LEAFs","text":""},{"location":"release-notes/#spines","title":"SPINEs","text":""},{"location":"release-notes/#underlay-configuration","title":"Underlay configuration:","text":"

    Port speed, port group speed, and port breakouts are configurable through the API.

    "},{"location":"release-notes/#vpc-overlay-implementation","title":"VPC (overlay) Implementation","text":"

    VXLAN-based BGP eVPN.

    "},{"location":"release-notes/#multi-subnet-vpcs","title":"Multi-subnet VPCs","text":"

    A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.

    "},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"

    Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.

    "},{"location":"release-notes/#vlan-namespace","title":"VLAN Namespace","text":"

    VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to the VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.

    This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.

    "},{"location":"release-notes/#switch-groups","title":"Switch Groups","text":"

    Each switch belongs to a list of switch groups used for identifying redundancy groups for things like external connectivity.

    "},{"location":"release-notes/#mutual-vpc-peering","title":"Mutual VPC Peering","text":"

    VPC peering is supported and possible between a pair of VPCs that belong to the same IPv4 and VLAN namespaces.

    "},{"location":"release-notes/#external-vpc-peering","title":"External VPC Peering","text":"

    VPC peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of the border or mixed leaf switches. Multiple \u201cexternal systems\u201d with multiple devices/links in each of them are supported.

    The user controls what subnets/prefixes to import and export from/to the external system.

    No NAT function is supported for external peering.

    "},{"location":"release-notes/#host-connectivity","title":"Host connectivity","text":"

    Servers can be attached as Unbundled, Bundled (LAG) and MCLAG.

    "},{"location":"release-notes/#dhcp-service","title":"DHCP Service","text":"

    A VPC is provided with an optional DHCP service with simple IPAM.

    "},{"location":"release-notes/#local-vpc-peering-loopbacks","title":"Local VPC peering loopbacks","text":"

    To enable local inter-VPC peering, which allows routing of traffic between VPCs, local loopbacks are required to overcome silicon limitations.

    "},{"location":"release-notes/#scale","title":"Scale","text":""},{"location":"release-notes/#software-versions","title":"Software versions","text":""},{"location":"release-notes/#known-limitations","title":"Known Limitations","text":""},{"location":"release-notes/#alpha-1","title":"Alpha-1","text":""},{"location":"troubleshooting/overview/","title":"Troubleshooting","text":"

    Under construction.

    "},{"location":"user-guide/connections/","title":"Connections","text":"

    The Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all connections between the devices in the Wiring Diagram.

    There are multiple types of connections.

    "},{"location":"user-guide/connections/#server-connections-user-facing","title":"Server connections (user-facing)","text":"

    Server connections are used to connect workload servers to the switches.

    "},{"location":"user-guide/connections/#unbundled","title":"Unbundled","text":"

    Unbundled server connections are used to connect servers to a single switch using a single port.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-4--unbundled--s5248-02\n  namespace: default\nspec:\n  unbundled:\n    link: # Defines a single link between a server and a switch\n      server:\n        port: server-4/enp2s1\n      switch:\n        port: s5248-02/Ethernet3\n
    "},{"location":"user-guide/connections/#bundled","title":"Bundled","text":"

    Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-3--bundled--s5248-01\n  namespace: default\nspec:\n  bundled:\n    links: # Defines multiple links between a single server and a single switch\n    - server:\n        port: server-3/enp2s1\n      switch:\n        port: s5248-01/Ethernet3\n    - server:\n        port: server-3/enp2s2\n      switch:\n        port: s5248-01/Ethernet4\n
    "},{"location":"user-guide/connections/#mclag","title":"MCLAG","text":"

    MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing).

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # Defines multiple links between a single server and a pair of switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
    "},{"location":"user-guide/connections/#switch-connections-fabric-facing","title":"Switch connections (fabric-facing)","text":"

    Switch connections are used to connect switches to each other and provide any needed \"service\" connectivity to implement the Fabric features.

    "},{"location":"user-guide/connections/#fabric","title":"Fabric","text":"

    Connections between a specific spine and leaf, covering all actual wires between that single pair.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5232-01--fabric--s5248-01\n  namespace: default\nspec:\n  fabric:\n    links: # Defines multiple links between a spine-leaf pair of switches with IP addresses\n    - leaf:\n        ip: 172.30.30.1/31\n        port: s5248-01/Ethernet48\n      spine:\n        ip: 172.30.30.0/31\n        port: s5232-01/Ethernet0\n    - leaf:\n        ip: 172.30.30.3/31\n        port: s5248-01/Ethernet56\n      spine:\n        ip: 172.30.30.2/31\n        port: s5232-01/Ethernet4\n
    "},{"location":"user-guide/connections/#mclag-domain","title":"MCLAG-Domain","text":"

    Used to define a pair of MCLAG switches with the Session and Peer links between them.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-01--mclag-domain--s5248-02\n  namespace: default\nspec:\n  mclagDomain:\n    peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link\n    - switch1:\n        port: s5248-01/Ethernet72\n      switch2:\n        port: s5248-02/Ethernet72\n    - switch1:\n        port: s5248-01/Ethernet73\n      switch2:\n        port: s5248-02/Ethernet73\n    sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link\n    - switch1:\n        port: s5248-01/Ethernet74\n      switch2:\n        port: s5248-02/Ethernet74\n    - switch1:\n        port: s5248-01/Ethernet75\n      switch2:\n        port: s5248-02/Ethernet75\n
    "},{"location":"user-guide/connections/#vpc-loopback","title":"VPC-Loopback","text":"

    Required to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is needed due to hardware limitations of the currently supported switches.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-01--vpc-loopback\n  namespace: default\nspec:\n  vpcLoopback:\n    links: # Defines multiple loopbacks on a single switch\n    - switch1:\n        port: s5248-01/Ethernet16\n      switch2:\n        port: s5248-01/Ethernet17\n    - switch1:\n        port: s5248-01/Ethernet18\n      switch2:\n        port: s5248-01/Ethernet19\n
    "},{"location":"user-guide/connections/#management","title":"Management","text":"

    Connection to the Control Node.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: control-1--mgmt--s5248-01-front\n  namespace: default\nspec:\n  management:\n    link: # Defines a single link between a control node and a switch\n      server:\n        ip: 172.30.20.0/31\n        port: control-1/enp2s1\n      switch:\n        ip: 172.30.20.1/31\n        port: s5248-01/Ethernet0\n
    "},{"location":"user-guide/connections/#connecting-fabric-to-outside-world","title":"Connecting Fabric to outside world","text":"

    Provides connectivity to the outside world, e.g. the internet, other networks, or other systems such as DHCP, NTP, LMA, or AAA services.

    "},{"location":"user-guide/connections/#staticexternal","title":"StaticExternal","text":"

    A simple way to connect things like a DHCP server directly to the Fabric by connecting it to specific switch ports.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: third-party-dhcp-server--static-external--s5248-04\n  namespace: default\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: s5248-04/Ethernet1 # switch port to use\n        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port\n        vlan: 1005 # Optional VLAN ID to use for the switch port, if 0 - no VLAN is configured\n        subnets: # List of subnets that will be routed to the switch port using static routes and next hop\n          - 10.99.0.1/24\n          - 10.199.0.100/32\n        nextHop: 172.30.50.1 # Next hop IP address that will be used when configuring static routes for the \"subnets\" list\n
    "},{"location":"user-guide/connections/#external","title":"External","text":"

    Connection to external systems, e.g. edge/provider routers, using BGP peering and configuring inbound/outbound communities, as well as granularly controlling what gets advertised and which routes are accepted.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-03--external--5835\n  namespace: default\nspec:\n  external:\n    link: # Defines a single link between a switch and an external system\n      switch:\n        port: s5248-03/Ethernet3\n
    "},{"location":"user-guide/devices/","title":"Switches and Servers","text":"

    All devices in the Hedgehog Fabric are divided into two groups, switches and servers, represented by the corresponding Switch and Server objects in the API. They are needed to define all participants of the Fabric and their roles in the Wiring Diagram, as well as the Connections between them.

    "},{"location":"user-guide/devices/#switches","title":"Switches","text":"

    Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API and consist of basic information like name, description, location, role, etc., as well as port group speeds, port breakouts, ASN, IP addresses and so on.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  asn: 65101 # ASN of the switch\n  description: leaf-1\n  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node\n  location:\n    location: gen--default--s5248-01\n  locationSig:\n    sig: <undefined>\n    uuidSig: <undefined>\n  portBreakouts: # Configures port breakouts for the switch\n    1/55: 4x25G\n  portGroupSpeeds: # Configures port group speeds for the switch\n    \"1\": 10G\n    \"2\": 10G\n  protocolIP: 172.30.11.100/32 # Used as BGP router ID\n  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf\n  vlanNamespaces: # Defines which VLANs could be used to attach servers\n  - default\n  vtepIP: 172.30.12.100/32\n  groups: # Defines which groups the switch belongs to\n  - some-group\n

    The SwitchGroup is just a marker at this point and doesn't have any configuration options.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: SwitchGroup\nmetadata:\n  name: border\n  namespace: default\nspec: {}\n
    "},{"location":"user-guide/devices/#servers","title":"Servers","text":"

    This includes both control nodes and user workload servers.

    Control Node:

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: control-1\n  namespace: default\nspec:\n  type: control # Type of the server, one of control or \"\" (empty) for regular workload server\n

    Regular workload server:

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: server-1\n  namespace: default\nspec:\n  description: MH s5248-01/E1 s5248-02/E1\n
    "},{"location":"user-guide/external/","title":"External Peering","text":"

    The Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and provide L3 connectivity. The External Peering feature allows setting up an external peering endpoint and enforcing several policies between internal and external endpoints.

    Hedgehog Fabric does not operate Edge side devices.

    "},{"location":"user-guide/external/#overview","title":"Overview","text":"

    Traffic exits the Fabric on Border Leafs that are connected to Edge devices. Border Leafs are suitable for terminating L2VPN connections, distinguishing VPC L3 routable traffic towards Edge devices, as well as landing VPC servers. Border Leafs (or Borders) can connect to several Edge devices.

    External Peering is only available on switch devices that are capable of sub-interfaces.

    "},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"

    In order to distinguish VPC traffic, the Edge device should be capable of: - Setting up BGP IPv4 to advertise and receive routes from the Fabric - Connecting to the Fabric Border Leaf over a VLAN - Marking egress routes towards the Fabric with BGP Communities - Filtering ingress routes from the Fabric by BGP Communities

    All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.

    "},{"location":"user-guide/external/#control-plane","title":"Control Plane","text":"

    The Fabric shares VPC routes with Edge devices via BGP. Peering is done over a VLAN in the IPv4 Unicast AFI/SAFI.

    "},{"location":"user-guide/external/#data-plane","title":"Data Plane","text":"

    VPC L3 routable traffic will be tagged with a VLAN and sent to the Edge device. Later processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.

    "},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"

    Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that the VPC can export to Edge devices and import from them.

    "},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"

    General configuration starts with the specification of External objects. Each object of External type can represent a set of Edge devices, a single BGP instance on an Edge device, or any other unified Edge entity that can be described with the following config.

    Each External should be bound to some VPC IP Namespace, otherwise prefix overlaps may happen.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: default--5835\nspec:\n  ipv4Namespace: # VPC IP Namespace\n  inboundCommunity: # BGP Standard Community of routes from Edge devices\n  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from Fabric\n
    "},{"location":"user-guide/external/#connection","title":"Connection","text":"

    A Connection of type external is used to identify the switch port on the Border Leaf that is cabled to an Edge device.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: # specified or generated\nspec:\n  external:\n    link:\n      switch:\n        port: # SwitchName/EthernetXXX\n
    "},{"location":"user-guide/external/#external-attachment","title":"External Attachment","text":"

    External Attachment is a definition of BGP peering and traffic connectivity between a Border Leaf and an External. Attachments are bound to a Connection of type external and specify the VLAN that will be used to segregate a particular Edge peering.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: #\nspec:\n  connection: # Name of the Connection with type external\n  external: # Name of the External to pick config\n  neighbor:\n    asn: # Edge device ASN\n    ip: # IP address of Edge device to peer with\n  switch:\n    ip: # IP Address on the Border Leaf to set up BGP peering\n    vlan: # Vlan ID to tag control and data traffic\n

    Several External Attachments can be configured for the same Connection, but for different VLANs.

    "},{"location":"user-guide/external/#external-vpc-peering","title":"External VPC Peering","text":"

    To allow a specific VPC to have access to Edge devices, the VPC should be bound to a specific External object. This is done via the ExternalPeering object.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: # Name of ExternalPeering\nspec:\n  permit:\n    external:\n      name: # External Name\n      prefixes: # List of prefixes(routes) to be allowed to pick up from External\n      - # IPv4 Prefix\n    vpc:\n      name: # VPC Name\n      subnets: # List of VPC subnets name to be allowed to have access to External (Edge)\n      - # Name of the subnet within VPC\n
    Prefixes can be specified as an exact match or with the mask range indicator keywords le and ge. le identifies prefix lengths that are less than or equal, and ge identifies prefix lengths that are greater than or equal.

    Example: allow ANY IPv4 prefix that comes from the External, i.e. all prefixes that match the default route with any prefix length:

    spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - le: 32\n        prefix: 0.0.0.0/0\n
    ge and le can also be combined.

    Example:

    spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - le: 24\n        ge: 16\n        prefix: 77.0.0.0/8\n
    For instance, 77.42.0.0/18 will be matched by the prefix rule given above, but 77.128.77.128/25 or 77.10.0.0/16 won't.

    "},{"location":"user-guide/external/#examples","title":"Examples","text":"

    This example shows peering with an External object named HedgeEdge, given a Fabric VPC named vpc-1, on the Border Leaf switchBorder that is cabled to an Edge device on port Ethernet42. vpc-1 is required to receive any prefixes advertised from the External.

    "},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"

    # hhfctl external create --name HedgeEdge --ipns default --in 65102:5000 --out 5000:65102\n
    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: HedgeEdge\n  namespace: default\nspec:\n  inboundCommunity: 65102:5000\n  ipv4Namespace: default\n  outboundCommunity: 5000:65102\n

    "},{"location":"user-guide/external/#connection_1","title":"Connection","text":"

    The Connection should be specified in the wiring diagram.

    ###\n### switchBorder--external--HedgeEdge\n###\napiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: switchBorder--external--HedgeEdge\nspec:\n  external:\n    link:\n      switch:\n        port: switchBorder/Ethernet42\n
    "},{"location":"user-guide/external/#externalattachment","title":"ExternalAttachment","text":"

    Specified in the wiring diagram:

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: switchBorder--HedgeEdge\nspec:\n  connection: switchBorder--external--HedgeEdge\n  external: HedgeEdge\n  neighbor:\n    asn: 65102\n    ip: 100.100.0.6\n  switch:\n    ip: 100.100.0.1/24\n    vlan: 100\n

    "},{"location":"user-guide/external/#externalpeering","title":"ExternalPeering","text":"
    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--HedgeEdge\nspec:\n  permit:\n    external:\n      name: HedgeEdge\n      prefixes:\n      - le: 32\n        prefix: 0.0.0.0/0\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n
    "},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"

    NOTE: Hedgehog does not recommend using the following configuration for production. It's just an example of an Edge Peer config.

    Interface config

    interface Ethernet2.100\n encapsulation dot1q vlan-id 100\n description switchBorder--Ethernet42\n no shutdown\n ip vrf forwarding VrfHedge\n ip address 100.100.0.6/24\n

    BGP Config

    !\nrouter bgp 65102 vrf VrfHedge\n log-neighbor-changes\n timers 60 180\n !\n address-family ipv4 unicast\n  maximum-paths 64\n  maximum-paths ibgp 1\n  import vrf VrfPublic\n !\n neighbor 100.100.0.1\n  remote-as 65103\n  !\n  address-family ipv4 unicast\n   activate\n   route-map HedgeIn in\n   route-map HedgeOut out\n   send-community both\n !\n
    Route Map configuration
    route-map HedgeIn permit 10\n match community Hedgehog\n!\nroute-map HedgeOut permit 10\n set community 65102:5000\n!\n\nbgp community-list standard HedgeIn permit 5000:65102\n

    "},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"

    This is an example of how the Hedgehog Fabric can be used with Harvester or any other hypervisor on servers connected to the Fabric. It assumes that you have already installed the Fabric and have some servers running Harvester attached to it.

    You'll need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.

    You can have multiple VPCs created and attached to the Connections to these servers to make them available to the VMs in Harvester or any other hypervisor.
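
    For example, a sketch of what that could look like for a single Harvester node dual-homed to a pair of switches, using the Server and MCLAG Connection schemas shown in the user guide (all names and ports here are hypothetical):

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: harvester-01\n  namespace: default\nspec:\n  description: Harvester node\n---\napiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: harvester-01--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # NIC names match the VlanConfig example below\n    - server:\n        port: harvester-01/enp3s0f1\n      switch:\n        port: s5248-01/Ethernet5\n    - server:\n        port: harvester-01/enp5s0f0\n      switch:\n        port: s5248-02/Ethernet5\n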

    "},{"location":"user-guide/harvester/#congigure-harvester","title":"Congigure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"

    From the \"Cluster Network/Confg\" side menu. Create a new Cluster Network.

    Here is what the CRD looks like cleaned up:

    apiVersion: network.harvesterhci.io/v1beta1\nkind: ClusterNetwork\nmetadata:\n  name: testnet\n
    "},{"location":"user-guide/harvester/#add-a-network-config","title":"Add a Network Config","text":"

    By clicking \"Create Network Confg\". Add your connections and select bonding type.

    The resulting cleaned up CRD:

    apiVersion: network.harvesterhci.io/v1beta1\nkind: VlanConfig\nmetadata:\n  name: testconfig\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\nspec:\n  clusterNetwork: testnet\n  uplink:\n    bondOptions:\n      miimon: 100\n      mode: 802.3ad\n    linkAttributes:\n      txQLen: -1\n    nics:\n      - enp5s0f0\n      - enp3s0f1\n
    "},{"location":"user-guide/harvester/#add-vlan-based-vm-networks","title":"Add VLAN based VM Networks","text":"

    Browse over to \"VM Networks\" and add one for each Vlan you want to support, assigning them to the cluster network.

    Here is what the CRDs will look like for both VLANs:

    apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1001'\n  name: testnet1001\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1001\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1001,\"ipam\":{}}\n
    apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  name: testnet1000\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1000'\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1000\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1000,\"ipam\":{}}\n
    "},{"location":"user-guide/harvester/#using-the-vpcs","title":"Using the VPCs","text":"

    Now you can choose the created VM Networks when creating a VM in Harvester and have the VM attached as part of the VPC.

    "},{"location":"user-guide/overview/","title":"Overview","text":"

    This chapter is intended to give an overview of the main features of the Hedgehog Fabric and their usage.

    "},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"

    Virtual Private Cloud: similar to a public cloud VPC, it provides an isolated private network for resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default # Limits to which subnets could be used by VPC to guarantee non-overlapping IPv4 ranges\n  vlanNamespace: default # Limits to which switches VPC could be attached to guarantee non-overlapping VLANs\n  subnets:\n    default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n      dhcp:\n        enable: true # On-demand DHCP server\n        range: # Optionally, start/end range could be specified\n          start: 10.10.1.10\n      subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n      vlan: \"1001\" # User-defined VLAN from vlan namespace\n    third-party-dhcp: # Another subnet\n      dhcp:\n        relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using StaticExternal connection\n      subnet: \"10.10.2.0/24\"\n      vlan: \"1002\"\n    another-subnet: # Minimal configuration is just a name, subnet and VLAN\n      subnet: 10.10.100.0/24\n      vlan: \"1100\"\n

    If you're using a third-party DHCP server by configuring spec.subnets.<subnet>.dhcp.relay, additional information will be added to the DHCP packet forwarded to the DHCP server to make it possible to identify the VPC and subnet. The information is added under the RelayAgentInfo option (82) of the DHCP packet. The relay sets two suboptions in the packet.

    "},{"location":"user-guide/vpcs/#vpcattachment","title":"VPCAttachment","text":"

    Represents a specific VPC subnet assignment to the Connection object, meaning an exact server port to VPC binding. It basically leads to the VPC being available on the specific server port(s) on the subnet VLAN.

    A VPC can be attached to a switch that is part of the VLAN namespace used by the VPC.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n  subnet: vpc-1/default # VPC subnet name\n
    "},{"location":"user-guide/vpcs/#vpcpeering","title":"VPCPeering","text":"

    It enables VPC-to-VPC connectivity. There are two types of VPC peering:

    VPC peering is only possible between VPCs attached to the same IPv4 namespace.

    Local:

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-3\n  namespace: default\nspec:\n  permit: # Defines a pair of VPCs to peer\n  - vpc-1: {} # meaning all subnets of two VPCs will be able to communicate to each other\n    vpc-3: {} # more advanced filtering will be supported in future releases\n

    Remote:

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n  remote: border # indicates a switch group to implement the peering on\n
    "},{"location":"user-guide/vpcs/#ipv4namespace","title":"IPv4Namespace","text":"

    Defines non-overlapping IPv4 ranges for VPC subnets. Each VPC belongs to a specific IPv4 namespace.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # List of the subnets that VPCs can pick their subnets from\n  - 10.10.0.0/16\n
    "},{"location":"user-guide/vpcs/#vlannamespace","title":"VLANNamespace","text":"

    Defines non-overlapping VLAN ranges for attaching servers. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n  - from: 1000\n    to: 2999\n
    "},{"location":"vlab/demo/","title":"Demo on VLAB","text":"

    The goal of this demo is to show how to use VPCs, attach and peer them, and test connectivity between the servers. Examples are based on the default VLAB topology.

    You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.

    "},{"location":"vlab/demo/#default-topology","title":"Default topology","text":"

    The default topology is Spine-Leaf with 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which only consists of 2 switches.

    For more details on customizing topologies, see the Running VLAB section.

    In the default topology, the following Control Node and Switch VMs are created:

    graph TD\n    CN[Control Node]\n\n    S1[Spine 1]\n    S2[Spine 2]\n\n    L1[MCLAG Leaf 1]\n    L2[MCLAG Leaf 2]\n    L3[Leaf 3]\n\n    CN --> L1\n    CN --> L2\n\n    S1 --> L1\n    S1 --> L2\n    S1 --> L3\n    S2 --> L1\n    S2 --> L2\n    S2 --> L3

    As well as test servers:

    graph TD\n    L1[MCLAG Leaf 1]\n    L2[MCLAG Leaf 2]\n    L3[Leaf 3]\n\n    TS1[Test Server 1]\n    TS2[Test Server 2]\n    TS3[Test Server 3]\n    TS4[Test Server 4]\n    TS5[Test Server 5]\n    TS6[Test Server 6]\n\n    TS1 --> L1\n    TS1 --> L2\n\n    TS2 --> L1\n    TS2 --> L2\n\n    TS3 --> L1\n    TS4 --> L2\n\n    TS5 --> L3\n    TS6 --> L3
    "},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"

    You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the control node or outside of the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs with a single subnet each and a DHCP server enabled with an optional IP address range start defined, and attach them to some test servers:

    core@control-1 ~ $ kubectl get conn | grep server\nserver-01--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-02--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-03--unbundled--leaf-01        unbundled      5h13m\nserver-04--bundled--leaf-02          bundled        5h13m\nserver-05--unbundled--leaf-03        unbundled      5h13m\nserver-06--bundled--leaf-03          bundled        5h13m\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n06:48:46 INF VPC created name=vpc-1\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10\n06:49:04 INF VPC created name=vpc-2\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02\n06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02\n

    A VPC subnet should belong to some IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   5h14m\n

    After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration was applied to the switches:

    core@control-1 ~ $ kubectl get agents\nNAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION\nleaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0\nleaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0\nleaf-03    server-leaf   VS-03           112s      5          5          v0.23.0\nspine-01   spine         VS-04           16m       3          3          v0.23.0\nspine-02   spine         VS-05           18m       4          4          v0.23.0\n

    As you can see, the APPLIED and APPLIEDG columns are equal, which means that the requested configuration was applied.

    "},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"

    You can use hhfab vlab ssh on the host to SSH into the test servers and configure networking there. For example, for server-01 (MCLAG-attached to both leaf-01 and leaf-02) we need to configure a bond with a VLAN on top of it, and for server-05 (single-homed, unbundled, attached to leaf-03) we need to configure just a VLAN; both will get an IP address from the DHCP server. You can use the ip command to configure networking on the servers or use the little helper preinstalled by Fabricator on the test servers.

    For server-01:

    core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n       valid_lft 86396sec preferred_lft 86396sec\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n

    And for server-02:

    core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n       valid_lft 86185sec preferred_lft 86185sec\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n
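
    For a single-homed server like server-05 (attached to leaf-03), only a VLAN interface is needed instead of a bond. Assuming the hhnet helper supports a vlan subcommand analogous to bond, and that a VPC subnet with VLAN 1001 has been attached to server-05's connection, the setup would look like:

    core@server-05 ~ $ hhnet cleanup\ncore@server-05 ~ $ hhnet vlan 1001 enp2s1 # assumed subcommand: VLAN interface on a single NIC\n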
    "},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"

    You can test connectivity between the servers before peering the VPCs using the ping command:

    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
    core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
    "},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"

    To enable connectivity between the VPCs, you need to peer them using the kubectl fabric vpc peer command:

    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n07:04:58 INF VPCPeering created name=vpc-1--vpc-2\n

    Make sure to wait until the peering is applied to the switches using the kubectl get agents command. After that, you can test connectivity between the servers again:

    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
    core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n

    If you delete the VPC peering using the following command and wait for the agent to apply the configuration on the switches, you will see that connectivity is lost again:

    core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2\nvpcpeering.vpc.githedgehog.com \"vpc-1--vpc-2\" deleted\n
    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n

    You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior and is caused by limitations of the VLAB environment.

    core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
    "},{"location":"vlab/demo/#using-vpcs-with-overlapping-subnets","title":"Using VPCs with overlapping subnets","text":"

    First of all, we'll need to make sure that we have a second IPv4Namespace with the same subnet as the default one:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   24m\n\ncore@control-1 ~ $ cat <<EOF > ipns-2.yaml\napiVersion: vpc.githedgehog.com/v1alpha2\nkind: IPv4Namespace\nmetadata:\n  name: ipns-2\n  namespace: default\nspec:\n  subnets:\n  - 10.0.0.0/16\nEOF\n\ncore@control-1 ~ $ kubectl apply -f ipns-2.yaml\nipv4namespace.vpc.githedgehog.com/ipns-2 created\n\ncore@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   30m\nipns-2    [\"10.0.0.0/16\"]   8s\n

    Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now we can create vpc-3 with the same subnet as vpc-1 (but in a different IPv4Namespace) and attach it to server-03:

    core@control-1 ~ $ cat <<EOF > vpc-3.yaml\napiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPC\nmetadata:\n  name: vpc-3\n  namespace: default\nspec:\n  ipv4Namespace: ipns-2\n  subnets:\n    default:\n      dhcp:\n        enable: true\n        range:\n          start: 10.0.1.10\n      subnet: 10.0.1.0/24\n      vlan: \"2001\"\n  vlanNamespace: default\nEOF\n\ncore@control-1 ~ $ kubectl apply -f vpc-3.yaml\n

    At this point, you can set up networking on server-03 the same way as for server-01 and server-02 in the previous sections and see that server-01 and server-03 now have IP addresses from the same subnet.

    "},{"location":"vlab/overview/","title":"Overview","text":"

    It's possible to run the Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out the Fabric and learn about its look and feel, API, capabilities and so on. It's not suitable for any data plane or performance testing, nor for production use.

    In the VLAB, all switches will start as empty VMs with only an ONIE image on them and will go through the whole discovery, boot and installation process like on real hardware.

    "},{"location":"vlab/overview/#overview_1","title":"Overview","text":"

    The hhfab CLI provides a special command vlab to manage virtual labs. It allows running a set of virtual machines to simulate the Fabric infrastructure, including the control node, switches and test servers, and automatically runs the installer to get the Fabric up and running.
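
    For example, after initializing (described in the Running VLAB section), bringing the lab up is expected to be a single command (assuming the up subcommand of vlab):

    hhfab vlab up\n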

    You can find more information about getting hhfab in the download section.

    "},{"location":"vlab/overview/#system-requirements","title":"System Requirements","text":"

    Currently, it's only tested on Ubuntu 22.04 LTS, but should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

    The following packages need to be installed: qemu-kvm, swtpm-tools, tpm2-tools and socat. Additionally, docker is required to log in to the OCI registry.

    By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which consists of just 2 switches.

    You can calculate the system requirements based on the allocated resources to the VMs using the following table:

    | Device | vCPU | RAM | Disk |\n|--------------|------|--------|--------|\n| Control Node | 6 | 6 GB | 100 GB |\n| Test Server | 2 | 768 MB | 10 GB |\n| Switch | 4 | 5 GB | 50 GB |

    This gives approximately the following totals for the default topologies:
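
    For example, the default Spine-Leaf topology (1 control node, 6 test servers and 5 switches) adds up to the totals reported by hhfab vlab up later in this guide:

    1 x Control Node:  6 vCPU    6 GB RAM             100 GB disk\n6 x Test Server:  12 vCPU  4.5 GB RAM              60 GB disk\n5 x Switch:       20 vCPU   25 GB RAM             250 GB disk\nTotal:            38 vCPU  ~36 GB (36352 MB) RAM  410 GB disk\n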

    Usually, none of the VMs will reach 100% utilization of the allocated resources, but as a rule of thumb you should make sure that you have at least the allocated RAM and disk space available for all VMs.

    NVMe SSD for VM disks is highly recommended.

    "},{"location":"vlab/overview/#installing-prerequisites","title":"Installing prerequisites","text":"

    On Ubuntu 22.04 LTS you can install all required packages using the following commands:

    curl -fsSL https://get.docker.com -o install-docker.sh\nsudo sh install-docker.sh\nsudo usermod -aG docker $USER\nnewgrp docker\n
    sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat\nsudo usermod -aG kvm $USER\nnewgrp kvm\nkvm-ok\n

    A good output of the kvm-ok command looks like this:

    ubuntu@docs:~$ kvm-ok\nINFO: /dev/kvm exists\nKVM acceleration can be used\n
    "},{"location":"vlab/overview/#next-steps","title":"Next steps","text":""},{"location":"vlab/running/","title":"Running VLAB","text":"

    Please, make sure to follow the prerequisites and check the system requirements in the VLAB Overview section before running VLAB.

    "},{"location":"vlab/running/#initialize-vlab","title":"Initialize VLAB","text":"

    As a first step you need to initialize Fabricator for the VLAB by running hhfab init --preset vlab (or -p vlab). It supports a lot of customization options, which you can find by adding --help to the command. If you want to tune the topology used for the VLAB, use the --fabric-mode (or -m) flag to choose between the spine-leaf (default) and collapsed-core topologies; you can also configure the number of spines, leafs, connections and so on. For example, the --spines-count and --mclag-leafs-count flags set the number of spines and MCLAG leafs respectively.

    So, by default you'll get 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf, with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC Loopback workaround.

    ubuntu@docs:~$ hhfab init -p vlab\n01:17:44 INF Generating wiring from gen flags\n01:17:44 INF Building wiring diagram fabricMode=spine-leaf chainControlLink=false controlLinksCount=0\n01:17:44 INF                     >>> spinesCount=2 fabricLinksCount=2\n01:17:44 INF                     >>> mclagLeafsCount=2 orphanLeafsCount=1\n01:17:44 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:17:44 INF                     >>> vpcLoopbacks=2\n01:17:44 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:17:44 INF Initialized preset=vlab fabricMode=spine-leaf config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

    Or if you want to run Collapsed Core topology with 2 MCLAG switches:

    ubuntu@docs:~$ hhfab init -p vlab -m collapsed-core\n01:20:07 INF Generating wiring from gen flags\n01:20:07 INF Building wiring diagram fabricMode=collapsed-core chainControlLink=false controlLinksCount=0\n01:20:07 INF                     >>> mclagLeafsCount=2 orphanLeafsCount=0\n01:20:07 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:20:07 INF                     >>> vpcLoopbacks=2\n01:20:07 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:20:07 INF Initialized preset=vlab fabricMode=collapsed-core config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

    Or you can run a custom topology, e.g. with 2 spines, 4 MCLAG leafs and 2 non-MCLAG leafs, using flags:

    ubuntu@docs:~$ hhfab init -p vlab --mclag-leafs-count 4 --orphan-leafs-count 2\n01:21:53 INF Generating wiring from gen flags\n01:21:53 INF Building wiring diagram fabricMode=spine-leaf chainControlLink=false controlLinksCount=0\n01:21:53 INF                     >>> spinesCount=2 fabricLinksCount=2\n01:21:53 INF                     >>> mclagLeafsCount=4 orphanLeafsCount=2\n01:21:53 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:21:53 INF                     >>> vpcLoopbacks=2\n01:21:53 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:21:53 INF Initialized preset=vlab fabricMode=spine-leaf config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

    Additionally, you can do extra Fabric configuration using flags on the init command or by passing a config file; more information about it is available in the Fabric Configuration section.

    Once you have initialized the VLAB, you need to download all artifacts and build the installer using the hhfab build command. It will automatically download all required artifacts from the OCI registry and build the installer as well as all other prerequisites for running the VLAB.

    "},{"location":"vlab/running/#build-the-installer-and-vlab","title":"Build the installer and VLAB","text":"
    ubuntu@docs:~$ hhfab build\n01:23:33 INF Building component=base\n01:23:33 WRN Attention! Development mode enabled - this is not secure! Default users and keys will be created.\n...\n01:23:33 INF Building component=control-os\n01:23:33 INF Building component=k3s\n01:23:33 INF Downloading name=m.l.hhdev.io:31000/githedgehog/k3s:v1.27.4-k3s1 to=.hhfab/control-install\nCopying k3s-airgap-images-amd64.tar.gz  187.36 MiB / 187.36 MiB   \u2819   0.00 b/s done\nCopying k3s                               56.50 MiB / 56.50 MiB   \u2819   0.00 b/s done\n01:23:35 INF Building component=zot\n01:23:35 INF Downloading name=m.l.hhdev.io:31000/githedgehog/zot:v1.4.3 to=.hhfab/control-install\nCopying zot-airgap-images-amd64.tar.gz  19.24 MiB / 19.24 MiB   \u2838   0.00 b/s done\n01:23:35 INF Building component=misc\n01:23:35 INF Downloading name=m.l.hhdev.io:31000/githedgehog/fabricator/k9s:v0.27.4 to=.hhfab/control-install\nCopying k9s  57.75 MiB / 57.75 MiB   \u283c   0.00 b/s done\n...\n01:25:40 INF Planned bundle=control-install name=fabric-api-chart op=\"push fabric/charts/fabric-api:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-image op=\"push fabric/fabric:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-chart op=\"push fabric/charts/fabric:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-agent-seeder op=\"push fabric/agent/x86_64:latest\"\n01:25:40 INF Planned bundle=control-install name=fabric-agent op=\"push fabric/agent:v0.23.0\"\n...\n01:25:40 INF Recipe created bundle=control-install actions=67\n01:25:40 INF Creating recipe bundle=server-install\n01:25:40 INF Planned bundle=server-install name=toolbox op=\"file /opt/hedgehog/toolbox.tar\"\n01:25:40 INF Planned bundle=server-install name=toolbox-load op=\"exec ctr\"\n01:25:40 INF Planned bundle=server-install name=hhnet op=\"file /opt/bin/hhnet\"\n01:25:40 INF Recipe created bundle=server-install actions=3\n01:25:40 INF Building done took=2m6.813384532s\n01:25:40 INF Packing bundle=control-install target=control-install.tgz\n01:25:45 INF Packing bundle=server-install target=server-install.tgz\n01:25:45 INF Packing done took=5.67007384s\n

    As soon as it's done, you can run the VLAB using the hhfab vlab up command. It will automatically start all VMs and run the installers on the control node and test servers. It will take some time for all VMs to come up and for the installer to finish; you will see the progress in the output. If you stop the command, it'll stop all VMs, and you can re-run it to get the VMs back up and running.

    "},{"location":"vlab/running/#run-vms-and-installers","title":"Run VMs and installers","text":"
    ubuntu@docs:~$ hhfab vlab up\n01:29:13 INF Starting VLAB server... basedir=.hhfab/vlab-vms vm-size=\"\" dry-run=false\n01:29:13 INF VM id=0 name=control-1 type=control\n01:29:13 INF VM id=1 name=server-01 type=server\n01:29:13 INF VM id=2 name=server-02 type=server\n01:29:13 INF VM id=3 name=server-03 type=server\n01:29:13 INF VM id=4 name=server-04 type=server\n01:29:13 INF VM id=5 name=server-05 type=server\n01:29:13 INF VM id=6 name=server-06 type=server\n01:29:13 INF VM id=7 name=leaf-01 type=switch-vs\n01:29:13 INF VM id=8 name=leaf-02 type=switch-vs\n01:29:13 INF VM id=9 name=leaf-03 type=switch-vs\n01:29:13 INF VM id=10 name=spine-01 type=switch-vs\n01:29:13 INF VM id=11 name=spine-02 type=switch-vs\n01:29:13 INF Total VM resources cpu=\"38 vCPUs\" ram=\"36352 MB\" disk=\"410 GB\"\n...\n01:29:13 INF Preparing VM id=0 name=control-1 type=control\n01:29:13 INF Copying files  from=.hhfab/control-os/ignition.json to=.hhfab/vlab-vms/control-1/ignition.json\n01:29:13 INF Copying files  from=.hhfab/vlab-files/flatcar.img to=.hhfab/vlab-vms/control-1/os.img\n 947.56 MiB / 947.56 MiB [==========================================================] 5.13 GiB/s done\n01:29:14 INF Copying files  from=.hhfab/vlab-files/flatcar_efi_code.fd to=.hhfab/vlab-vms/control-1/efi_code.fd\n01:29:14 INF Copying files  from=.hhfab/vlab-files/flatcar_efi_vars.fd to=.hhfab/vlab-vms/control-1/efi_vars.fd\n01:29:14 INF Resizing VM image (may require sudo password) name=control-1\n01:29:17 INF Initializing TPM name=control-1\n...\n01:29:46 INF Installing VM name=control-1 type=control\n01:29:46 INF Installing VM name=server-01 type=server\n01:29:46 INF Installing VM name=server-02 type=server\n01:29:46 INF Installing VM name=server-03 type=server\n01:29:47 INF Installing VM name=server-04 type=server\n01:29:47 INF Installing VM name=server-05 type=server\n01:29:47 INF Installing VM name=server-06 type=server\n01:29:49 INF Running VM id=0 name=control-1 type=control\n01:29:49 INF Running VM id=1 name=server-01 type=server\n01:29:49 INF Running VM id=2 name=server-02 type=server\n01:29:49 INF Running VM id=3 name=server-03 type=server\n01:29:50 INF Running VM id=4 name=server-04 type=server\n01:29:50 INF Running VM id=5 name=server-05 type=server\n01:29:50 INF Running VM id=6 name=server-06 type=server\n01:29:50 INF Running VM id=7 name=leaf-01 type=switch-vs\n01:29:50 INF Running VM id=8 name=leaf-02 type=switch-vs\n01:29:51 INF Running VM id=9 name=leaf-03 type=switch-vs\n01:29:51 INF Running VM id=10 name=spine-01 type=switch-vs\n01:29:51 INF Running VM id=11 name=spine-02 type=switch-vs\n...\n01:30:41 INF VM installed name=server-06 type=server installer=server-install\n01:30:41 INF VM installed name=server-01 type=server installer=server-install\n01:30:41 INF VM installed name=server-02 type=server installer=server-install\n01:30:41 INF VM installed name=server-04 type=server installer=server-install\n01:30:41 INF VM installed name=server-03 type=server installer=server-install\n01:30:41 INF VM installed name=server-05 type=server installer=server-install\n...\n01:31:04 INF Running installer on VM name=control-1 type=control installer=control-install\n...\n01:35:15 INF Done took=3m39.586394608s\n01:35:15 INF VM installed name=control-1 type=control installer=control-install\n

    After you see VM installed name=control-1, the installer has finished and you can get into the control node and other VMs to watch the Fabric coming up and the switches getting provisioned.

    "},{"location":"vlab/running/#default-credentials","title":"Default credentials","text":"

    Fabricator creates default users and keys for you to log in to the control node and test servers, as well as to the SONiC Virtual Switches.

    The default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access only to the switch CLI is op with password HHFab.Op!.

    "},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"

    The hhfab vlab command provides ssh and serial subcommands to access the VMs. You can use ssh to get into the control node and test servers after the VMs are started. You can use serial to get into the switch VMs while they are provisioning and installing the software; after the switches are installed you can use ssh to get into them as well.

    You can select the device you want to access interactively or pass its name using the --vm flag.

    ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190  and / toggles search\nSSH to VM:\n  \ud83e\udd94 control-1\n  server-01\n  server-02\n  server-03\n  server-04\n  server-05\n  server-06\n  leaf-01\n  leaf-02\n  leaf-03\n  spine-01\n  spine-02\n\n----------- VM Details ------------\nID:             0\nName:           control-1\nReady:          true\nBasedir:        .hhfab/vlab-vms/control-1\n
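
    Alternatively, skip the interactive menu by passing the VM name directly:

    ubuntu@docs:~$ hhfab vlab ssh --vm server-01\n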

    On the control node you'll have access to kubectl, the Fabric CLI and k9s to manage the Fabric. You can find information about the switch provisioning by running kubectl get agents -o wide. It usually takes about 10-15 minutes for the switches to get installed.

    After the switches are provisioned, you will see something like this:

    core@control-1 ~ $ kubectl get agents -o wide\nNAME       ROLE          DESCR           HWSKU                      ASIC   HEARTBEAT   APPLIED   APPLIEDG   CURRENTG   VERSION   SOFTWARE                ATTEMPT   ATTEMPTG   AGE\nleaf-01    server-leaf   VS-01 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     30s         5m5s      4          4          v0.23.0   4.1.1-Enterprise_Base   5m5s      4          10m\nleaf-02    server-leaf   VS-02 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     27s         3m30s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m30s     3          10m\nleaf-03    server-leaf   VS-03           DellEMC-S5248f-P-25G-DPB   vs     18s         3m52s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m52s     4          10m\nspine-01   spine         VS-04           DellEMC-S5248f-P-25G-DPB   vs     26s         3m59s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m59s     3          10m\nspine-02   spine         VS-05           DellEMC-S5248f-P-25G-DPB   vs     19s         3m53s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m53s     4          10m\n

    The Heartbeat column shows how long ago the switch sent its last heartbeat to the control node. The Applied column shows how long ago the switch last applied the configuration. AppliedG shows the generation of the applied configuration, while CurrentG shows the generation the switch is supposed to run. If AppliedG and CurrentG differ, the switch is still in the process of applying the configuration.
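
    Since Agents are regular Kubernetes objects, you can follow the convergence using the standard kubectl watch flag:

    core@control-1 ~ $ kubectl get agents -o wide -w\n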

    At that point the Fabric is ready, and you can use kubectl and kubectl fabric to manage it. You can find more about that in the Running Demo and User Guide sections.

    "},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"

    You can get the main Fabric objects using the kubectl get command on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.

    For example, to get the list of switches you can run:

    core@control-1 ~ $ kubectl get switch\nNAME       ROLE          DESCR           GROUPS   LOCATIONUUID                           AGE\nleaf-01    server-leaf   VS-01 MCLAG 1            5e2ae08a-8ba9-599a-ae0f-58c17cbbac67   6h10m\nleaf-02    server-leaf   VS-02 MCLAG 1            5a310b84-153e-5e1c-ae99-dff9bf1bfc91   6h10m\nleaf-03    server-leaf   VS-03                    5f5f4ad5-c300-5ae3-9e47-f7898a087969   6h10m\nspine-01   spine         VS-04                    3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da   6h10m\nspine-02   spine         VS-05                    96fbd4eb-53b5-5a4c-8d6a-bbc27d883030   6h10m\n

    Similar for the servers:

    core@control-1 ~ $ kubectl get server\nNAME        TYPE      DESCR                        AGE\ncontrol-1   control   Control node                 6h10m\nserver-01             S-01 MCLAG leaf-01 leaf-02   6h10m\nserver-02             S-02 MCLAG leaf-01 leaf-02   6h10m\nserver-03             S-03 Unbundled leaf-01       6h10m\nserver-04             S-04 Bundled leaf-02         6h10m\nserver-05             S-05 Unbundled leaf-03       6h10m\nserver-06             S-06 Bundled leaf-03         6h10m\n

    For connections:

    core@control-1 ~ $ kubectl get connection\nNAME                                 TYPE           AGE\ncontrol-1--mgmt--leaf-01             management     6h11m\ncontrol-1--mgmt--leaf-02             management     6h11m\ncontrol-1--mgmt--leaf-03             management     6h11m\ncontrol-1--mgmt--spine-01            management     6h11m\ncontrol-1--mgmt--spine-02            management     6h11m\nleaf-01--mclag-domain--leaf-02       mclag-domain   6h11m\nleaf-01--vpc-loopback                vpc-loopback   6h11m\nleaf-02--vpc-loopback                vpc-loopback   6h11m\nleaf-03--vpc-loopback                vpc-loopback   6h11m\nserver-01--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-02--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-03--unbundled--leaf-01        unbundled      6h11m\nserver-04--bundled--leaf-02          bundled        6h11m\nserver-05--unbundled--leaf-03        unbundled      6h11m\nserver-06--bundled--leaf-03          bundled        6h11m\nspine-01--fabric--leaf-01            fabric         6h11m\nspine-01--fabric--leaf-02            fabric         6h11m\nspine-01--fabric--leaf-03            fabric         6h11m\nspine-02--fabric--leaf-01            fabric         6h11m\nspine-02--fabric--leaf-02            fabric         6h11m\nspine-02--fabric--leaf-03            fabric         6h11m\n

    For IPv4 and VLAN namespaces:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   6h12m\n\ncore@control-1 ~ $ kubectl get vlanns\nNAME      AGE\ndefault   6h12m\n
    "},{"location":"vlab/running/#reset-vlab","title":"Reset VLAB","text":"

    To reset VLAB and start over just remove the .hhfab directory and run hhfab init again.
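
    For example, for the default VLAB preset:

    ubuntu@docs:~$ rm -rf .hhfab\nubuntu@docs:~$ hhfab init -p vlab\n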

    "},{"location":"vlab/running/#next-steps","title":"Next steps","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

    Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments. Without vendor lock-in.

    Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to the public clouds, and provides a multi-tenant API to define user intent on network isolation and connectivity, which is automatically transformed into switch and software appliance configuration.

    You can read more about concepts and architecture in the documentation.

    You can find how to download and try Fabric on the self-hosted fully virtualized lab or on the hardware.

    "},{"location":"architecture/fabric/","title":"Hedgehog Network Fabric","text":"

    The Hedgehog Open Network Fabric is an open source network architecture that provides connectivity between virtual and physical workloads and a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides standard Kubernetes interfaces to manage the elements of the physical network, along with a mechanism to configure virtual networks and define attachments to them. The Hedgehog Fabric isolates different groups of workloads by placing them in different virtual networks called VPCs. To achieve this, we define abstractions starting from the physical network, where a Connection describes how a physical server on the network connects to a physical switch on the fabric.

    "},{"location":"architecture/fabric/#underlay-network","title":"Underlay Network","text":"

    The Hedgehog Fabric currently supports two underlay network topologies.

    "},{"location":"architecture/fabric/#collapsed-core","title":"Collapsed Core","text":"

    A collapsed core topology is just a pair of switches connected in an MCLAG configuration with no other network elements. All workloads attach to these two switches.

    The leaves in this setup are configured as an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as gateways for the workloads attached to them. The configuration of the underlay in the collapsed core is very simple, which makes it ideal for very small deployments.
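
    As an illustration only, such an MCLAG port channel is modeled by a Connection object of the mclag type (see the Fabric API reference); the API group and the port names below are assumptions, and a real wiring diagram will differ:

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-01--mclag--leaf-01--leaf-02\nspec:\n  mclag:\n    links:\n    - server:\n        port: server-01/enp2s1\n      switch:\n        port: leaf-01/Ethernet1\n    - server:\n        port: server-01/enp2s2\n      switch:\n        port: leaf-02/Ethernet1\n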

    "},{"location":"architecture/fabric/#spine-leaf","title":"Spine - Leaf","text":"

    A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and spines providing connectivity between the leaves.

    This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. We used RFC7938 as the reference for establishing the underlay network.
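
    Purely as an illustration of this RFC7938-style design (the Fabric generates the actual switch configuration for you; ASNs and interface names below are made up), the per-leaf eBGP setup is conceptually similar to:

    router bgp 65101\n ! each leaf has its own ASN and peers with every spine\n neighbor SPINES peer-group\n neighbor SPINES remote-as external\n neighbor Ethernet48 interface peer-group SPINES\n neighbor Ethernet52 interface peer-group SPINES\n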

    "},{"location":"architecture/fabric/#overlay-network","title":"Overlay Network","text":"

    The overlay network runs on top of the underlay network to create virtual networks. It isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic in VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP EVPN/VXLAN to enable the creation and management of virtual networks on top of the underlay, and it supports multiple virtual networks over the same underlay to provide multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. The following sections give a high-level overview of how VPCs and their associated objects are implemented in the Hedgehog Fabric.

    "},{"location":"architecture/fabric/#vpc","title":"VPC","text":"

    We know what a VPC is and how to attach workloads to a specific VPC. Let us now take a look at how this is actually implemented on the network to provide the view of a private network.

    "},{"location":"architecture/fabric/#vpc-peering","title":"VPC Peering","text":"

    To enable communication between 2 different VPCs, we need to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.

    "},{"location":"architecture/overview/","title":"Overview","text":"

    Under construction.

    "},{"location":"concepts/overview/","title":"Concepts","text":""},{"location":"concepts/overview/#introduction","title":"Introduction","text":"

    Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. It means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.

    Hedgehog Fabric consists of the following components:

    "},{"location":"concepts/overview/#fabric-api","title":"Fabric API","text":"

    All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) named the Wiring Diagram. It allows you to define switches, servers, control nodes, external systems and the connections between them in a single place and then use it to deploy and manage the whole infrastructure. On top of that, Fabric provides a set of APIs to manage VPCs, the connections between them, and the connections between VPCs and External systems.

    "},{"location":"concepts/overview/#wiring-diagram-api","title":"Wiring Diagram API","text":"

    Wiring Diagram consists of the following resources:

    "},{"location":"concepts/overview/#user-facing-api","title":"User-facing API","text":""},{"location":"concepts/overview/#fabricator","title":"Fabricator","text":"

    Installer builder and VLAB.

    "},{"location":"concepts/overview/#das-boot","title":"Das Boot","text":"

    Switch boot and installation.

    "},{"location":"concepts/overview/#fabric","title":"Fabric","text":"

    Control plane and switch agent.

    "},{"location":"contribute/docs/","title":"Documentation","text":""},{"location":"contribute/docs/#getting-started","title":"Getting started","text":"

    This documentation is built using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.

    In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine, as well as any editor of your choice, preferably one supporting Markdown preview. You can run the preview server using the following command:

    make serve\n

    Now you can open a continuously updated preview of your edits in the browser at http://127.0.0.1:8000. Pages will be automatically updated while you're editing.

    Additionally you can run

    make build\n

    to make sure that your changes will be built correctly and don't break the documentation.

    "},{"location":"contribute/docs/#workflow","title":"Workflow","text":"

    If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It'll open the page in the GitHub editor, where you can edit it and create a pull request with your changes.

    Please, never push to the master or release/* branches directly. Always create a pull request and wait for the review.

    Each pull request will be automatically built and a preview will be deployed. You can find the link to the preview in the comments on the pull request.

    "},{"location":"contribute/docs/#repository","title":"Repository","text":"

    Documentation is organized in per-release branches:

    The latest release branch is referenced as the latest version in the documentation and is used by default when you open the documentation.

    "},{"location":"contribute/docs/#file-layout","title":"File layout","text":"

    All documentation files are located in the docs directory. Each file is a Markdown file with the .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.

    For example, top-level .pages in this repository looks like this:

    nav:\n  - index.md\n  - getting-started\n  - concepts\n  - Wiring Diagram: wiring\n  - Install & Upgrade: install-upgrade\n  - User Guide: user-guide\n  - Reference: reference\n  - Troubleshooting: troubleshooting\n  - ...\n  - release-notes\n  - contribute\n

    Where you can add pages by file name like index.md, and the page title will be taken from the file (the first line starting with #). Additionally, you can reference a whole directory to create a nested section in the navigation. You can also add custom titles by using the : separator, like Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.

    More details in the MkDocs Pages plugin.

    "},{"location":"contribute/docs/#abbreaviations","title":"Abbreaviations","text":"

    You can find abbreviations in the includes/abbreviations.md file. You can add various abbreviations there, and all usages of the defined words in the documentation will get a highlight.

    For example, we have the following in includes/abbreviations.md:

    *[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric\n

    It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.

    "},{"location":"contribute/docs/#markdown-extensions","title":"Markdown extensions","text":"

    We're using the MkDocs Material theme with multiple extensions enabled. You can find a detailed reference here; below are some of the most useful ones.

    To view code for examples, please, check the source code of this page.

    "},{"location":"contribute/docs/#text-formatting","title":"Text formatting","text":"

    Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible and comments can be added inline.
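
    The underlying Critic Markup for the paragraph above looks like this (assuming the pymdownx.critic extension used by this theme):

    Text can be {--deleted--} and replacement text {++added++}. This can also be\ncombined into {~~one~>a single~~} operation. {==Highlighting==} is also\npossible and {>>comments<<} can be added inline.\n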

    Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.

    Keyboard keys can be written like so:

    Ctrl+Alt+Del

    And inline icons/emojis can be added like this:

    :fontawesome-regular-face-laugh-wink:\n:fontawesome-brands-twitter:{ .twitter }\n

    "},{"location":"contribute/docs/#admonitions","title":"Admonitions","text":"

    Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.

    Lorem ipsum

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.
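
    The source of the \"Lorem ipsum\" admonition above looks like this (using the standard Material admonition syntax):

    !!! note \"Lorem ipsum\"\n\n    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod\n    nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor\n    massa, nec semper lorem quam in massa.\n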

    "},{"location":"contribute/docs/#code-blocks","title":"Code blocks","text":"

    Details can be found here.

    Simple code block with line nums and highlighted lines:

    bubble_sort.py
    def bubble_sort(items):\n    for i in range(len(items)):\n        for j in range(len(items) - 1 - i):\n            if items[j] > items[j + 1]:\n                items[j], items[j + 1] = items[j + 1], items[j]\n
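
    The fence options producing the title, line numbers and highlighted lines look like this (assuming the pymdownx.superfences setup used by this theme; the highlighted line numbers are illustrative and the body is abbreviated):

    ```py title=\"bubble_sort.py\" linenums=\"1\" hl_lines=\"2 3\"\ndef bubble_sort(items):\n    ...\n```\n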

    Code annotations:

    theme:\n  features:\n    - content.code.annotate # (1)\n
    1. I'm a code annotation! I can contain code, formatted text, images, ... basically anything that can be written in Markdown.
    "},{"location":"contribute/docs/#tabs","title":"Tabs","text":"

    You can use Tabs to better organize content.

    CC++
    #include <stdio.h>\n\nint main(void) {\n  printf(\"Hello world!\\n\");\n  return 0;\n}\n
    #include <iostream>\n\nint main(void) {\n  std::cout << \"Hello world!\" << std::endl;\n  return 0;\n}\n
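
    The content tabs above are written like this (assuming the pymdownx.tabbed extension; code bodies are abbreviated):

    === \"C\"\n\n    ```c\n    #include <stdio.h>\n    ...\n    ```\n\n=== \"C++\"\n\n    ```c++\n    #include <iostream>\n    ...\n    ```\n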
    "},{"location":"contribute/docs/#tables","title":"Tables","text":"Method Description GET Fetch resource PUT Update resource DELETE Delete resource"},{"location":"contribute/docs/#diagrams","title":"Diagrams","text":"

    You can directly include Mermaid diagrams in your Markdown files. Details can be found here.

    graph LR\n  A[Start] --> B{Error?};\n  B -->|Yes| C[Hmm...];\n  C --> D[Debug];\n  D --> B;\n  B ---->|No| E[Yay!];
    sequenceDiagram\n  autonumber\n  Alice->>John: Hello John, how are you?\n  loop Healthcheck\n      John->>John: Fight against hypochondria\n  end\n  Note right of John: Rational thoughts!\n  John-->>Alice: Great!\n  John->>Bob: How about you?\n  Bob-->>John: Jolly good!
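
    A Mermaid diagram is just a fenced code block with the mermaid language tag, e.g. (abbreviated):

    ```mermaid\ngraph LR\n  A[Start] --> B{Error?};\n  ...\n```\n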
    "},{"location":"contribute/overview/","title":"Overview","text":"

    Under construction.

    "},{"location":"getting-started/download/","title":"Download","text":""},{"location":"getting-started/download/#getting-access","title":"Getting access","text":"

    Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.

    After that you will be provided with credentials to access the software on GitHub Package. In order to use it, you need to log in to the registry using the following command:

    docker login ghcr.io\n
    "},{"location":"getting-started/download/#downloading-the-software","title":"Downloading the software","text":"

    The main entry point for the software is the Hedgehog Fabricator CLI named hhfab. All software is published into the OCI registry GitHub Package, including binaries, container images, helm charts and more. The latest stable hhfab binary can be downloaded from the GitHub Package using the following command:

    curl -fsSL https://i.hhdev.io/hhfab | bash\n

    Or you can download a specific version using the following command:

    curl -fsSL https://i.hhdev.io/hhfab | VERSION=alpha-X bash\n

    The VERSION environment variable can be used to specify the version of the software to download. If it's not specified, the latest release will be downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

    It requires ORAS to be installed, which is used to download the binary from the OCI registry; it can be installed using the following command:

    curl -fsSL https://i.hhdev.io/oras | bash\n

    Currently only Linux x86 is supported for running hhfab.

    "},{"location":"getting-started/download/#next-steps","title":"Next steps","text":""},{"location":"install-upgrade/build-wiring/","title":"Build Wiring Diagram","text":"

    Under construction.

    In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, please run the sample generator that produces VLAB-compatible wiring diagrams:

    ubuntu@sl-dev:~$ hhfab wiring sample -h\nNAME:\n   hhfab wiring sample - sample wiring diagram (would work for vlab)\n\nUSAGE:\n   hhfab wiring sample [command options] [arguments...]\n\nOPTIONS:\n   --brief, -b                    brief output (only warn and error) (default: false)\n   --fabric-mode value, -m value  fabric mode (one of: collapsed-core, spine-leaf) (default: \"spine-leaf\")\n   --help, -h                     show help\n   --verbose, -v                  verbose output (includes debug) (default: false)\n\n   wiring generator options:\n\n   --chain-control-link         chain control links instead of all switches directly connected to control node if fabric mode is spine-leaf (default: false)\n   --control-links-count value  number of control links if chain-control-link is enabled (default: 0)\n   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)\n   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)\n   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)\n   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)\n   --orphan-leafs-count value   number of orphan leafs (default: 0)\n   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)\n   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)\n
    "},{"location":"install-upgrade/config/","title":"Fabric Configuration","text":"

    You can find more information about using hhfab init in the help message by running it with the --help flag.

    "},{"location":"install-upgrade/onie-update/","title":"ONIE Update/Upgrade","text":""},{"location":"install-upgrade/onie-update/#hedgehog-onie-honie-supported-systems","title":"Hedgehog ONIE (HONIE) Supported Systems","text":""},{"location":"install-upgrade/onie-update/#updating-onie","title":"Updating ONIE","text":""},{"location":"install-upgrade/overview/","title":"Install Fabric","text":"

    Under construction.

    "},{"location":"install-upgrade/overview/#prerequisites","title":"Prerequisites","text":""},{"location":"install-upgrade/overview/#main-steps","title":"Main steps","text":"

    This chapter is dedicated to the Hedgehog Fabric installation on the bare-metal control node(s) and switches, their preparation and configuration.

    Please, get hhfab installed following the instructions from the Download section.

    Main steps to install Fabric are:

    1. Install hhfab on a machine with access to the internet
      1. Prepare Wiring Diagram
      2. Select Fabric Configuration
      3. Build Control Node configuration and installer
    2. Install Control Node
      1. Install Flatcar Linux on the Control Node
      2. Upload and run Control Node installer on the Control Node
    3. Prepare supported switches
      1. Install Hedgehog ONiE (HONiE) on them
      2. Reboot them into ONiE Install Mode and they will be automatically provisioned
    "},{"location":"install-upgrade/overview/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and installer","text":"

    It's the only step that requires internet access to download artifacts and build the installer.

    Once you've prepared the Wiring Diagram, you can initialize Fabricator by running the hhfab init command, passing optional configuration as well as wiring diagram file(s) as flags. Additionally, there are a lot of customizations available as flags, e.g. to set up default credentials, keys and so on; please refer to hhfab init --help for more.

    The --dev option enables development mode, which sets up default credentials and keys for the Control Node and switches.

    Alternatively, you can pass your own credentials and keys using the --authorized-key and --control-password-hash flags. The password hash can be generated using the openssl passwd -5 command. Further customizations are available in the config file that can be passed using the --config flag.

    hhfab init --preset lab --dev --wiring file1.yaml --wiring file2.yaml\nhhfab build\n
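
    Alternatively, a non-dev initialization with your own credentials could look like this sketch (the key and hash values are placeholders):

    openssl passwd -5\nhhfab init --preset lab --wiring wiring.yaml --authorized-key \"ssh-ed25519 AAAA...\" --control-password-hash '$5$...'\nhhfab build\n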

    As a result, you will get the following files in the .hhfab directory or the one you've passed using the --basedir flag:

    "},{"location":"install-upgrade/overview/#install-control-node","title":"Install Control Node","text":"

    It's a fully air-gapped installation and doesn't require internet access.

    Please, download the latest stable Flatcar Container Linux ISO from the link and boot into it (attaching media via IPMI, a USB stick or any other way).

    Once you've booted into the Flatcar installer, you need to download the ignition.json built in the previous step to it and run the Flatcar installation:

    sudo flatcar-install -d /dev/sda -i ignition.json\n

    Where /dev/sda is the disk you want to install the Control Node to, and ignition.json is the control-os/ignition.json file from the previous step, downloaded to the Flatcar installer.

    Once the installation is finished, reboot the machine and wait for it to boot into the installed Flatcar Linux.

    At that point, you should be able to get into the installed Flatcar Linux using the dev or provided credentials with the user core, and you can now install Hedgehog Open Network Fabric on it. Download control-install.tgz to the just-installed Control Node (e.g. by using scp) and run it.
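
    For example, assuming the Control Node is reachable at 192.168.1.10:

    scp control-install.tgz core@192.168.1.10:\n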

    tar xzf control-install.tgz && cd control-install && sudo ./hhfab-recipe run\n

    It'll output a log of the Fabric installation (including the Kubernetes cluster, OCI registry, misc components and so on); you should see the following output at the end:

    ...\n01:34:45 INF Running name=reloader-image op=\"push fabricator/reloader:v1.0.40\"\n01:34:47 INF Running name=reloader-chart op=\"push fabricator/charts/reloader:1.0.40\"\n01:34:47 INF Running name=reloader-install op=\"file /var/lib/rancher/k3s/server/manifests/hh-reloader-install.yaml\"\n01:34:47 INF Running name=reloader-wait op=\"wait deployment/reloader-reloader\"\ndeployment.apps/reloader-reloader condition met\n01:35:15 INF Done took=3m39.586394608s\n

    At that point, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, preinstalled as part of the Control Node installer.

    You can now get HONiE installed on your switches and reboot them into ONiE Install Mode and they will be automatically provisioned from the Control Node.

    "},{"location":"install-upgrade/requirements/","title":"System Requirements","text":""},{"location":"install-upgrade/requirements/#non-ha-minimal-setup-1-control-node","title":"Non-HA (minimal) setup - 1 Control Node","text":" Minimal Recommended CPU 4 8 RAM 12 GB 16 GB Disk 100 GB 250 GB"},{"location":"install-upgrade/requirements/#future-ha-setup-3-control-nodes-per-node","title":"(Future) HA setup - 3+ Control Nodes (per node)","text":" Minimal Recommended CPU 4 8 RAM 12 GB 16 GB Disk 100 GB 250 GB"},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":" Minimal Recommended CPU 1 2 RAM 1 GB 1.5 GB Disk 5 GB 10 GB"},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":""},{"location":"install-upgrade/supported-devices/#spine","title":"Spine","text":""},{"location":"install-upgrade/supported-devices/#leaf","title":"Leaf","text":""},{"location":"reference/api/","title":"API Reference","text":""},{"location":"reference/api/#packages","title":"Packages","text":""},{"location":"reference/api/#agentgithedgehogcomv1alpha2","title":"agent.githedgehog.com/v1alpha2","text":"

    Package v1alpha2 contains API Schema definitions for the agent v1alpha2 API group. This is the internal API group for the switch and control node agents. Not intended to be modified by the user.

    "},{"location":"reference/api/#resource-types","title":"Resource Types","text":""},{"location":"reference/api/#agent","title":"Agent","text":"

    Agent is an internal API object used by the controller to pass all relevant information to the agent running on a specific switch in order to fully configure it and manage its lifecycle. It is not intended to be used directly by users. Spec of the object isn't user-editable, it is managed by the controller. Status of the object is updated by the agent and is used by the controller to track the state of the agent and the switch it is running on. Name of the Agent object is the same as the name of the switch it is running on and it's created in the same namespace as the Switch object.

    Field Description apiVersion string agent.githedgehog.com/v1alpha2 kind string Agent metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec AgentSpec Spec is the desired state of the Agent status AgentStatus Status is the observed state of the Agent"},{"location":"reference/api/#agentstatus","title":"AgentStatus","text":"

    AgentStatus defines the observed state of the agent running on a specific switch and includes information about the switch itself as well as the state of the agent and applied configuration.

    Appears in: - Agent

    Field Description version string Current running agent version installID string ID of the agent installation, used to track NOS re-installs runID string ID of the agent run, used to track NOS reboots lastHeartbeat Time Time of the last heartbeat from the agent lastAttemptTime Time Time of the last attempt to apply configuration lastAttemptGen integer Generation of the last attempt to apply configuration lastAppliedTime Time Time of the last successful configuration application lastAppliedGen integer Generation of the last successful configuration application nosInfo NOSInfo Information about the switch and NOS statusUpdates ApplyStatusUpdate array Status updates from the agent conditions Condition array Conditions of the agent, includes readiness marker for use with kubectl wait"},{"location":"reference/api/#nosinfo","title":"NOSInfo","text":"

    NOSInfo contains information about the switch and NOS received from the switch itself by the agent

    Appears in: - AgentStatus

    Field Description asicVersion string ASIC name, such as \"broadcom\" or \"vs\" buildCommit string NOS build commit buildDate string NOS build date builtBy string NOS build user configDbVersion string NOS config DB version, such as \"version_4_2_1\" distributionVersion string Distribution version, such as \"Debian 10.13\" hardwareVersion string Hardware version, such as \"X01\" hwskuVersion string Hwsku version, such as \"DellEMC-S5248f-P-25G-DPB\" kernelVersion string Kernel version, such as \"5.10.0-21-amd64\" mfgName string Manufacturer name, such as \"Dell EMC\" platformName string Platform name, such as \"x86_64-dellemc_s5248f_c3538-r0\" productDescription string NOS product description, such as \"Enterprise SONiC Distribution by Broadcom - Enterprise Base package\" productVersion string NOS product version, empty for Broadcom SONiC serialNumber string Switch serial number softwareVersion string NOS software version, such as \"4.2.0-Enterprise_Base\" upTime string Switch uptime, such as \"21:21:27 up 1 day, 23:26, 0 users, load average: 1.92, 1.99, 2.00 \""},{"location":"reference/api/#dhcpgithedgehogcomv1alpha2","title":"dhcp.githedgehog.com/v1alpha2","text":"

    Package v1alpha2 contains API Schema definitions for the dhcp v1alpha2 API group. It is primarily an internal API group for the intended Hedgehog DHCP server configuration and for storing leases, as well as making them available to the end user through the API. Not intended to be modified by the user.

    "},{"location":"reference/api/#resource-types_1","title":"Resource Types","text":""},{"location":"reference/api/#dhcpallocated","title":"DHCPAllocated","text":"

    DHCPAllocated is a single allocated IP with expiry time and hostname from DHCP requests; it's effectively a DHCP lease

    Appears in: - DHCPSubnetStatus

    Field Description ip string Allocated IP address expiry Time Expiry time of the lease hostname string Hostname from DHCP request"},{"location":"reference/api/#dhcpsubnet","title":"DHCPSubnet","text":"

    DHCPSubnet is the configuration (spec) for the Hedgehog DHCP server and storage for the leases (status). It's primarily an internal API, but it makes allocated IPs / lease information available to the end user through the API. Not intended to be modified by the user.

    Field Description apiVersion string dhcp.githedgehog.com/v1alpha2 kind string DHCPSubnet metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec DHCPSubnetSpec Spec is the desired state of the DHCPSubnet status DHCPSubnetStatus Status is the observed state of the DHCPSubnet"},{"location":"reference/api/#dhcpsubnetspec","title":"DHCPSubnetSpec","text":"

    DHCPSubnetSpec defines the desired state of DHCPSubnet

    Appears in: - DHCPSubnet

    Field Description subnet string Full VPC subnet name (including VPC name), such as \"vpc-0/default\" cidrBlock string CIDR block to use for VPC subnet, such as \"10.10.10.0/24\" gateway string Gateway, such as 10.10.10.1 startIP string Start IP from the CIDRBlock to allocate IPs, such as 10.10.10.10 endIP string End IP from the CIDRBlock to allocate IPs, such as 10.10.10.99 vrf string VRF name to identify specific VPC (will be added to DHCP packets by DHCP relay in suboption 151), such as \"VrfVvpc-1\" as it's named on switch circuitID string VLAN ID to identify specific subnet within the VPC, such as \"Vlan1000\" as it's named on switch
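
    Putting the example values from the field descriptions together, a DHCPSubnet object (managed by the Fabric itself and shown here for illustration only; the object name is hypothetical) would look like:

    apiVersion: dhcp.githedgehog.com/v1alpha2\nkind: DHCPSubnet\nmetadata:\n  name: vpc-0--default\n  namespace: default\nspec:\n  subnet: vpc-0/default\n  cidrBlock: 10.10.10.0/24\n  gateway: 10.10.10.1\n  startIP: 10.10.10.10\n  endIP: 10.10.10.99\n  vrf: VrfVvpc-1\n  circuitID: Vlan1000\n
    "},{"location":"reference/api/#dhcpsubnetstatus","title":"DHCPSubnetStatus","text":"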

    DHCPSubnetStatus defines the observed state of DHCPSubnet

    Appears in: - DHCPSubnet

    Field Description allocated object (keys:string, values:DHCPAllocated) Allocated is a map of allocated IPs with expiry time and hostname from DHCP requests"},{"location":"reference/api/#vpcgithedgehogcomv1alpha2","title":"vpc.githedgehog.com/v1alpha2","text":"

    Package v1alpha2 contains API Schema definitions for the vpc v1alpha2 API group. It is the public API group for the VPCs and Externals APIs. Intended to be used by the user.

    "},{"location":"reference/api/#resource-types_2","title":"Resource Types","text":""},{"location":"reference/api/#external","title":"External","text":"

    The External object represents an external system connected to the Fabric and available to a specific IPv4Namespace. Users can do external peering with the external system by specifying the name of the External object, without needing to worry about the details of how the external system is attached to the Fabric.

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string External metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalSpec Spec is the desired state of the External status ExternalStatus Status is the observed state of the External"},{"location":"reference/api/#externalattachment","title":"ExternalAttachment","text":"

    ExternalAttachment is a definition of how a specific switch is connected to an external system (External object). Effectively it represents BGP peering between the switch and the external system, including all needed configuration.

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string ExternalAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalAttachmentSpec Spec is the desired state of the ExternalAttachment status ExternalAttachmentStatus Status is the observed state of the ExternalAttachment"},{"location":"reference/api/#externalattachmentneighbor","title":"ExternalAttachmentNeighbor","text":"

    ExternalAttachmentNeighbor defines the BGP neighbor configuration for the external attachment

    Appears in: - ExternalAttachmentSpec

    Field Description asn integer ASN is the ASN of the BGP neighbor ip string IP is the IP address of the BGP neighbor to peer with"},{"location":"reference/api/#externalattachmentspec","title":"ExternalAttachmentSpec","text":"

    ExternalAttachmentSpec defines the desired state of ExternalAttachment

    Appears in: - AgentSpec - ExternalAttachment

    Field Description external string External is the name of the External object this attachment belongs to connection string Connection is the name of the Connection object this attachment belongs to (essentially the name of the switch/port) switch ExternalAttachmentSwitch Switch is the switch port configuration for the external attachment neighbor ExternalAttachmentNeighbor Neighbor is the BGP neighbor configuration for the external attachment"},{"location":"reference/api/#externalattachmentstatus","title":"ExternalAttachmentStatus","text":"

    ExternalAttachmentStatus defines the observed state of ExternalAttachment

    Appears in: - ExternalAttachment

    "},{"location":"reference/api/#externalattachmentswitch","title":"ExternalAttachmentSwitch","text":"

    ExternalAttachmentSwitch defines the switch port configuration for the external attachment

    Appears in: - ExternalAttachmentSpec

    Field Description vlan integer VLAN is the VLAN ID used for the subinterface on a switch port specified in the connection ip string IP is the IP address of the subinterface on a switch port specified in the connection"},{"location":"reference/api/#externalpeering","title":"ExternalPeering","text":"

    ExternalPeering is the Schema for the externalpeerings API

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string ExternalPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalPeeringSpec Spec is the desired state of the ExternalPeering status ExternalPeeringStatus Status is the observed state of the ExternalPeering"},{"location":"reference/api/#externalpeeringspec","title":"ExternalPeeringSpec","text":"

    ExternalPeeringSpec defines the desired state of ExternalPeering

    Appears in: - AgentSpec - ExternalPeering

    Field Description permit ExternalPeeringSpecPermit Permit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit"},{"location":"reference/api/#externalpeeringspecexternal","title":"ExternalPeeringSpecExternal","text":"

    ExternalPeeringSpecExternal defines the External-side of the configuration to peer with

    Appears in: - ExternalPeeringSpecPermit

    Field Description name string Name is the name of the External to peer with prefixes ExternalPeeringSpecPrefix array Prefixes is the list of prefixes to permit from the External to the VPC"},{"location":"reference/api/#externalpeeringspecpermit","title":"ExternalPeeringSpecPermit","text":"

    ExternalPeeringSpecPermit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit

    Appears in: - ExternalPeeringSpec

    Field Description vpc ExternalPeeringSpecVPC VPC is the VPC-side of the configuration to peer with external ExternalPeeringSpecExternal External is the External-side of the configuration to peer with"},{"location":"reference/api/#externalpeeringspecprefix","title":"ExternalPeeringSpecPrefix","text":"

    ExternalPeeringSpecPrefix defines the prefix to permit from the External to the VPC

    Appears in: - ExternalPeeringSpecExternal

    Field Description prefix string Prefix is the subnet to permit from the External to the VPC, e.g. 0.0.0.0/0 for default route ge integer Ge is the minimum prefix length to permit from the External to the VPC, e.g. 24 for /24 le integer Le is the maximum prefix length to permit from the External to the VPC, e.g. 32 for /32"},{"location":"reference/api/#externalpeeringspecvpc","title":"ExternalPeeringSpecVPC","text":"

    ExternalPeeringSpecVPC defines the VPC-side of the configuration to peer with

    Appears in: - ExternalPeeringSpecPermit

    Field Description name string Name is the name of the VPC to peer with subnets string array Subnets is the list of subnets to advertise from VPC to the External
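
    Assembling the pieces above, a hypothetical ExternalPeering permitting the default route from an External named ext-1 to the default subnet of vpc-1 could look like this sketch (the object and subnet names are assumptions):

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--ext-1\n  namespace: default\nspec:\n  permit:\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n    external:\n      name: ext-1\n      prefixes:\n      - prefix: 0.0.0.0/0\n
    "},{"location":"reference/api/#externalpeeringstatus","title":"ExternalPeeringStatus","text":"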

    ExternalPeeringStatus defines the observed state of ExternalPeering

    Appears in: - ExternalPeering

    "},{"location":"reference/api/#externalspec","title":"ExternalSpec","text":"

    ExternalSpec describes the IPv4 namespace the External belongs to and the inbound/outbound communities which are used to filter routes from/to the external system.

    Appears in: - AgentSpec - External

    Field Description ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this External belongs to inboundCommunity string InboundCommunity is the name of the inbound community to filter routes from the external system outboundCommunity string OutboundCommunity is the name of the outbound community that all outbound routes will be stamped with"},{"location":"reference/api/#externalstatus","title":"ExternalStatus","text":"

    ExternalStatus defines the observed state of External

    Appears in: - External

    "},{"location":"reference/api/#ipv4namespace","title":"IPv4Namespace","text":"

    IPv4Namespace represents a namespace for VPC subnet allocation. All VPC subnets within a single IPv4Namespace are non-overlapping. Users can create multiple IPv4Namespaces to allocate the same VPC subnets.

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string IPv4Namespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec IPv4NamespaceSpec Spec is the desired state of the IPv4Namespace status IPv4NamespaceStatus Status is the observed state of the IPv4Namespace"},{"location":"reference/api/#ipv4namespacespec","title":"IPv4NamespaceSpec","text":"

    IPv4NamespaceSpec defines the desired state of IPv4Namespace

    Appears in: - AgentSpec - IPv4Namespace

    Field Description subnets string array Subnets is the list of subnets to allocate VPC subnets from, couldn't overlap between each other and with Fabric reserved subnets"},{"location":"reference/api/#ipv4namespacestatus","title":"IPv4NamespaceStatus","text":"

    IPv4NamespaceStatus defines the observed state of IPv4Namespace

    Appears in: - IPv4Namespace

    "},{"location":"reference/api/#vpc","title":"VPC","text":"

    VPC is a Virtual Private Cloud. Similar to a public cloud VPC, it provides an isolated private network for resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string VPC metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCSpec Spec is the desired state of the VPC status VPCStatus Status is the observed state of the VPC"},{"location":"reference/api/#vpcattachment","title":"VPCAttachment","text":"

    VPCAttachment is the Schema for the vpcattachments API

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string VPCAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCAttachmentSpec Spec is the desired state of the VPCAttachment status VPCAttachmentStatus Status is the observed state of the VPCAttachment"},{"location":"reference/api/#vpcattachmentspec","title":"VPCAttachmentSpec","text":"

    VPCAttachmentSpec defines the desired state of VPCAttachment

    Appears in: - AgentSpec - VPCAttachment

    Field Description subnet string Subnet is the full name of the VPC subnet to attach to, such as \"vpc-1/default\" connection string Connection is the name of the connection to attach to the VPC"},{"location":"reference/api/#vpcattachmentstatus","title":"VPCAttachmentStatus","text":"

    VPCAttachmentStatus defines the observed state of VPCAttachment

    Appears in: - VPCAttachment

    "},{"location":"reference/api/#vpcdhcp","title":"VPCDHCP","text":"

    VPCDHCP defines the on-demand DHCP configuration for the subnet

    Appears in: - VPCSubnet

    Field Description relay string Relay is the DHCP relay IP address, if specified, DHCP server will be disabled enable boolean Enable enables DHCP server for the subnet range VPCDHCPRange Range is the DHCP range for the subnet if DHCP server is enabled"},{"location":"reference/api/#vpcdhcprange","title":"VPCDHCPRange","text":"

    Underlying type: struct{Start string \"json:\\\"start,omitempty\\\"\"; End string \"json:\\\"end,omitempty\\\"\"}

    VPCDHCPRange defines the DHCP range for the subnet if DHCP server is enabled

    Appears in: - VPCDHCP
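
A minimal sketch of how this range could appear inside a VPC subnet's DHCP configuration (addresses are illustrative):

dhcp:\n  enable: true\n  range: # both start and end are optional\n    start: 10.10.1.10\n    end: 10.10.1.99\n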

    "},{"location":"reference/api/#vpcpeer","title":"VPCPeer","text":"

    Appears in: - VPCPeeringSpec
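
As a hedged sketch based on the field types, a permit entry could restrict peering to specific subnets of one VPC while allowing all subnets of the other (VPC and subnet names are illustrative):

spec:\n  permit:\n  - vpc-1:\n      subnets: # advertise only these subnets to the peer\n      - default\n    vpc-2: {} # empty means all subnets\n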

    Field Description subnets string array Subnets is the list of subnets to advertise from current VPC to the peer VPC"},{"location":"reference/api/#vpcpeering","title":"VPCPeering","text":"

VPCPeering represents a peering between two VPCs with corresponding filtering rules. A minimal example of VPC peering, showing vpc-1 to vpc-2 peering with all subnets allowed: spec: permit: - vpc-1: {} vpc-2: {}

    Field Description apiVersion string vpc.githedgehog.com/v1alpha2 kind string VPCPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCPeeringSpec Spec is the desired state of the VPCPeering status VPCPeeringStatus Status is the observed state of the VPCPeering"},{"location":"reference/api/#vpcpeeringspec","title":"VPCPeeringSpec","text":"

    VPCPeeringSpec defines the desired state of VPCPeering

    Appears in: - AgentSpec - VPCPeering

    Field Description remote string permit map[string]VPCPeer array Permit defines a list of the peering policies - which VPC subnets will have access to the peer VPC subnets."},{"location":"reference/api/#vpcpeeringstatus","title":"VPCPeeringStatus","text":"

    VPCPeeringStatus defines the observed state of VPCPeering

    Appears in: - VPCPeering

    "},{"location":"reference/api/#vpcspec","title":"VPCSpec","text":"

    VPCSpec defines the desired state of VPC. At least one subnet is required.

    Appears in: - AgentSpec - VPC

    Field Description subnets object (keys:string, values:VPCSubnet) Subnets is the list of VPC subnets to configure ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this VPC belongs to vlanNamespace string VLANNamespace is the name of the VLANNamespace this VPC belongs to"},{"location":"reference/api/#vpcstatus","title":"VPCStatus","text":"

    VPCStatus defines the observed state of VPC

    Appears in: - VPC

    Field Description vni integer VNI is the global Fabric-level VNI allocated for the VPC subnetVNIs object (keys:string, values:integer) SubnetVNIs is the map of subnet names to the global Fabric-level VNIs allocated for the VPC subnets"},{"location":"reference/api/#vpcsubnet","title":"VPCSubnet","text":"

    VPCSubnet defines the VPC subnet configuration

    Appears in: - VPCSpec

    Field Description subnet string Subnet is the subnet CIDR block, such as \"10.0.0.0/24\", should belong to the IPv4Namespace and be unique within the namespace dhcp VPCDHCP DHCP is the on-demand DHCP configuration for the subnet vlan string VLAN is the VLAN ID for the subnet, should belong to the VLANNamespace and be unique within the namespace"},{"location":"reference/api/#wiringgithedgehogcomv1alpha2","title":"wiring.githedgehog.com/v1alpha2","text":"

Package v1alpha2 contains API Schema definitions for the wiring v1alpha2 API group. It is a public API group mainly for the underlay definition, including Switches, Servers, and the wiring between them. Intended to be used by the user.

    "},{"location":"reference/api/#resource-types_3","title":"Resource Types","text":""},{"location":"reference/api/#baseportname","title":"BasePortName","text":"

    BasePortName defines the full name of the switch port

    Appears in: - ConnExternalLink - ConnFabricLinkSwitch - ConnMgmtLinkServer - ConnMgmtLinkSwitch - ConnStaticExternalLinkSwitch - ServerToSwitchLink - SwitchToSwitchLink

Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object."},{"location":"reference/api/#connbundled","title":"ConnBundled","text":"

    ConnBundled defines the bundled connection (port channel, single server to a single switch with multiple links)

    Appears in: - ConnectionSpec

    Field Description links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#conneslag","title":"ConnESLAG","text":"

    ConnESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)

    Appears in: - ConnectionSpec
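
By analogy with the MCLAG connection examples in the user guide, an ESLAG connection could be sketched like this (a hedged illustration; server, switch, and port names are illustrative, not from the generated reference):

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-2--eslag--s5248-01--s5248-02\n  namespace: default\nspec:\n  eslag:\n    links: # Links between a single server and 2-4 switches\n    - server:\n        port: server-2/enp2s1\n      switch:\n        port: s5248-01/Ethernet2\n    - server:\n        port: server-2/enp2s2\n      switch:\n        port: s5248-02/Ethernet2\n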

    Field Description links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connexternal","title":"ConnExternal","text":"

    ConnExternal defines the external connection (single switch to a single external device with a single link)

    Appears in: - ConnectionSpec

    Field Description link ConnExternalLink Link is the external connection link"},{"location":"reference/api/#connexternallink","title":"ConnExternalLink","text":"

    ConnExternalLink defines the external connection link

    Appears in: - ConnExternal

    Field Description switch BasePortName"},{"location":"reference/api/#connfabric","title":"ConnFabric","text":"

    ConnFabric defines the fabric connection (single spine to a single leaf with at least one link)

    Appears in: - ConnectionSpec

    Field Description links FabricLink array Links is the list of spine-to-leaf links"},{"location":"reference/api/#connfabriclinkswitch","title":"ConnFabricLinkSwitch","text":"

    ConnFabricLinkSwitch defines the switch side of the fabric link

    Appears in: - FabricLink

Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the fabric link (switch port configuration)"},{"location":"reference/api/#connmclag","title":"ConnMCLAG","text":"

    ConnMCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links)

    Appears in: - ConnectionSpec

    Field Description links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connmclagdomain","title":"ConnMCLAGDomain","text":"

ConnMCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch or redundancy group and allows the use of MCLAG connections to connect servers in a multi-homed way.

    Appears in: - ConnectionSpec

Field Description peerLinks SwitchToSwitchLink array PeerLinks is the list of peer links between the switches, used to pass server traffic between switches sessionLinks SwitchToSwitchLink array SessionLinks is the list of session links between the switches, used only to pass MCLAG control plane and BGP traffic between switches"},{"location":"reference/api/#connmgmt","title":"ConnMgmt","text":"

    ConnMgmt defines the management connection (single control node/server to a single switch with a single link)

    Appears in: - ConnectionSpec

    Field Description link ConnMgmtLink"},{"location":"reference/api/#connmgmtlink","title":"ConnMgmtLink","text":"

    ConnMgmtLink defines the management connection link

    Appears in: - ConnMgmt

    Field Description server ConnMgmtLinkServer Server is the server side of the management link switch ConnMgmtLinkSwitch Switch is the switch side of the management link"},{"location":"reference/api/#connmgmtlinkserver","title":"ConnMgmtLinkServer","text":"

    ConnMgmtLinkServer defines the server side of the management link

    Appears in: - ConnMgmtLink

Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the server side of the management link (control node port configuration) mac string MAC is an optional MAC address of the control node port for the management link, if specified, it will be used to create a \"virtual\" link with the connection names on the control node"},{"location":"reference/api/#connmgmtlinkswitch","title":"ConnMgmtLinkSwitch","text":"

    ConnMgmtLinkSwitch defines the switch side of the management link

    Appears in: - ConnMgmtLink

Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the management link (switch port configuration) oniePortName string ONIEPortName is an optional ONIE port name of the switch side of the management link that's only used by the IPv6 Link Local discovery"},{"location":"reference/api/#connstaticexternal","title":"ConnStaticExternal","text":"

    ConnStaticExternal defines the static external connection (single switch to a single external device with a single link)

    Appears in: - ConnectionSpec

    Field Description link ConnStaticExternalLink Link is the static external connection link"},{"location":"reference/api/#connstaticexternallink","title":"ConnStaticExternalLink","text":"

    ConnStaticExternalLink defines the static external connection link

    Appears in: - ConnStaticExternal

    Field Description switch ConnStaticExternalLinkSwitch Switch is the switch side of the static external connection link"},{"location":"reference/api/#connstaticexternallinkswitch","title":"ConnStaticExternalLinkSwitch","text":"

    ConnStaticExternalLinkSwitch defines the switch side of the static external connection link

    Appears in: - ConnStaticExternalLink

Field Description port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the static external connection link (switch port configuration) nextHop string NextHop is the next hop IP address for static routes that will be created for the subnets subnets string array Subnets is the list of subnets that will get static routes using the specified next hop vlan integer VLAN is the optional VLAN ID to be configured on the switch port"},{"location":"reference/api/#connunbundled","title":"ConnUnbundled","text":"

    ConnUnbundled defines the unbundled connection (no port channel, single server to a single switch with a single link)

    Appears in: - ConnectionSpec

    Field Description link ServerToSwitchLink Link is the server-to-switch link mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connvpcloopback","title":"ConnVPCLoopback","text":"

ConnVPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) that enables the automated workaround named \"VPC Loopback\", which helps avoid switch hardware limitations and traffic going through the CPU in some cases

    Appears in: - ConnectionSpec

    Field Description links SwitchToSwitchLink array Links is the list of VPC loopback links"},{"location":"reference/api/#connection","title":"Connection","text":"

The Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all physical and logical connections between the devices in the Wiring Diagram. The connection type is defined by the top-level field in the ConnectionSpec; exactly one of them can be used in a single Connection object.

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string Connection metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ConnectionSpec Spec is the desired state of the Connection status ConnectionStatus Status is the observed state of the Connection"},{"location":"reference/api/#connectionspec","title":"ConnectionSpec","text":"

    ConnectionSpec defines the desired state of Connection

    Appears in: - AgentSpec - Connection

    Field Description unbundled ConnUnbundled Unbundled defines the unbundled connection (no port channel, single server to a single switch with a single link) bundled ConnBundled Bundled defines the bundled connection (port channel, single server to a single switch with multiple links) management ConnMgmt Management defines the management connection (single control node/server to a single switch with a single link) mclag ConnMCLAG MCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links) eslag ConnESLAG ESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links) mclagDomain ConnMCLAGDomain MCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch for server multi-homing fabric ConnFabric Fabric defines the fabric connection (single spine to a single leaf with at least one link) vpcLoopback ConnVPCLoopback VPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) for automated workaround external ConnExternal External defines the external connection (single switch to a single external device with a single link) staticExternal ConnStaticExternal StaticExternal defines the static external connection (single switch to a single external device with a single link)"},{"location":"reference/api/#connectionstatus","title":"ConnectionStatus","text":"

    ConnectionStatus defines the observed state of Connection

    Appears in: - Connection

    "},{"location":"reference/api/#fabriclink","title":"FabricLink","text":"

    FabricLink defines the fabric connection link

    Appears in: - ConnFabric

    Field Description spine ConnFabricLinkSwitch Spine is the spine side of the fabric link leaf ConnFabricLinkSwitch Leaf is the leaf side of the fabric link"},{"location":"reference/api/#location","title":"Location","text":"

Location defines the geographical position of the device in a datacenter

    Appears in: - SwitchSpec

    Field Description location string aisle string row string rack string slot string"},{"location":"reference/api/#locationsig","title":"LocationSig","text":"

    LocationSig contains signatures for the location UUID as well as the device location itself

    Appears in: - SwitchSpec

    Field Description sig string uuidSig string"},{"location":"reference/api/#rack","title":"Rack","text":"

    Rack is the Schema for the racks API

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string Rack metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec RackSpec status RackStatus"},{"location":"reference/api/#rackposition","title":"RackPosition","text":"

RackPosition defines the geographical position of the rack in a datacenter

    Appears in: - RackSpec

    Field Description location string aisle string row string"},{"location":"reference/api/#rackspec","title":"RackSpec","text":"

    RackSpec defines the properties of a rack which we are modelling

    Appears in: - Rack

    Field Description numServers integer hasControlNode boolean hasConsoleServer boolean position RackPosition"},{"location":"reference/api/#rackstatus","title":"RackStatus","text":"

    RackStatus defines the observed state of Rack

    Appears in: - Rack

    "},{"location":"reference/api/#server","title":"Server","text":"

    Server is the Schema for the servers API

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string Server metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ServerSpec Spec is desired state of the server status ServerStatus Status is the observed state of the server"},{"location":"reference/api/#serverfacingconnectionconfig","title":"ServerFacingConnectionConfig","text":"

    ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration

    Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

    Field Description mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#serverspec","title":"ServerSpec","text":"

    ServerSpec defines the desired state of Server

    Appears in: - Server

    Field Description type ServerType Type is the type of server, could be control for control nodes or default (empty string) for everything else description string Description is a description of the server profile string Profile is the profile of the server, name of the ServerProfile object to be used for this server, currently not used by the Fabric"},{"location":"reference/api/#serverstatus","title":"ServerStatus","text":"

    ServerStatus defines the observed state of Server

    Appears in: - Server

    "},{"location":"reference/api/#servertoswitchlink","title":"ServerToSwitchLink","text":"

    ServerToSwitchLink defines the server-to-switch link

    Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

    Field Description server BasePortName Server is the server side of the connection switch BasePortName Switch is the switch side of the connection"},{"location":"reference/api/#servertype","title":"ServerType","text":"

    Underlying type: string

    ServerType is the type of server, could be control for control nodes or default (empty string) for everything else

    Appears in: - ServerSpec

    "},{"location":"reference/api/#switch","title":"Switch","text":"

Switch is the Schema for the switches API. All switches should always have 1 label defined: wiring.githedgehog.com/rack. It represents the name of the rack the switch belongs to.
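
For illustration, the rack label could look like this on a Switch object (a hedged sketch; the rack name is illustrative):

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Switch\nmetadata:\n  name: s5248-01\n  labels:\n    wiring.githedgehog.com/rack: rack-1 # name of the rack the switch belongs to\n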

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string Switch metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchSpec Spec is desired state of the switch status SwitchStatus Status is the observed state of the switch"},{"location":"reference/api/#switchgroup","title":"SwitchGroup","text":"

SwitchGroup is the marker API object to group switches together; a switch can belong to multiple groups

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string SwitchGroup metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchGroupSpec Spec is the desired state of the SwitchGroup status SwitchGroupStatus Status is the observed state of the SwitchGroup"},{"location":"reference/api/#switchgroupspec","title":"SwitchGroupSpec","text":"

    SwitchGroupSpec defines the desired state of SwitchGroup

    Appears in: - SwitchGroup

    "},{"location":"reference/api/#switchgroupstatus","title":"SwitchGroupStatus","text":"

    SwitchGroupStatus defines the observed state of SwitchGroup

    Appears in: - SwitchGroup

    "},{"location":"reference/api/#switchredundancy","title":"SwitchRedundancy","text":"

SwitchRedundancy is the switch redundancy configuration, which includes the name of the redundancy group the switch belongs to and its type, used for both MCLAG and ESLAG connections. It defines how redundancy will be configured and handled on the switch, as well as which connection types will be available. If not specified, the switch will not be part of any redundancy group. If the name isn't empty, the type must be specified as well, and the name should be the same as one of the SwitchGroup objects.

    Appears in: - SwitchSpec
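
A hedged sketch of how redundancy might be set on a Switch object, based on the fields below (the group name is illustrative and must match an existing SwitchGroup):

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Switch\nmetadata:\n  name: s5248-01\nspec:\n  redundancy:\n    name: eslag-1 # name of an existing SwitchGroup object\n    type: eslag # or mclag\n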

Field Description name string Group is the name of the redundancy group the switch belongs to type RedundancyType Type is the type of the redundancy group, can be mclag or eslag"},{"location":"reference/api/#switchrole","title":"SwitchRole","text":"

    Underlying type: string

SwitchRole is the role of the switch; it can be spine, server-leaf, border-leaf, or mixed-leaf

    Appears in: - AgentSpec - SwitchSpec

    "},{"location":"reference/api/#switchspec","title":"SwitchSpec","text":"

    SwitchSpec defines the desired state of Switch

    Appears in: - AgentSpec - Switch

Field Description role SwitchRole Role is the role of the switch, can be spine, server-leaf, border-leaf, or mixed-leaf description string Description is a description of the switch profile string Profile is the profile of the switch, name of the SwitchProfile object to be used for this switch, currently not used by the Fabric location Location Location is the location of the switch, it is used to generate the location UUID and location signature locationSig LocationSig LocationSig is the location signature for the switch groups string array Groups is a list of switch groups the switch belongs to redundancy SwitchRedundancy Redundancy is the switch redundancy configuration including the name of the redundancy group the switch belongs to and its type, used for both MCLAG and ESLAG connections vlanNamespaces string array VLANNamespaces is a list of VLAN namespaces the switch is part of, their VLAN ranges must not overlap asn integer ASN is the ASN of the switch ip string IP is the IP of the switch that could be used to access it from other switches and control nodes in the Fabric vtepIP string VTEPIP is the VTEP IP of the switch protocolIP string ProtocolIP is used as BGP Router ID for switch configuration portGroupSpeeds object (keys:string, values:string) PortGroupSpeeds is a map of port group speeds, key is the port group name, value is the speed, such as '\"2\": 10G' portSpeeds object (keys:string, values:string) PortSpeeds is a map of port speeds, key is the port name, value is the speed portBreakouts object (keys:string, values:string) PortBreakouts is a map of port breakouts, key is the port name, value is the breakout configuration, such as \"1/55: 4x25G\""},{"location":"reference/api/#switchstatus","title":"SwitchStatus","text":"

    SwitchStatus defines the observed state of Switch

    Appears in: - Switch

    "},{"location":"reference/api/#switchtoswitchlink","title":"SwitchToSwitchLink","text":"

    SwitchToSwitchLink defines the switch-to-switch link

    Appears in: - ConnMCLAGDomain - ConnVPCLoopback

    Field Description switch1 BasePortName Switch1 is the first switch side of the connection switch2 BasePortName Switch2 is the second switch side of the connection"},{"location":"reference/api/#vlannamespace","title":"VLANNamespace","text":"

    VLANNamespace is the Schema for the vlannamespaces API

    Field Description apiVersion string wiring.githedgehog.com/v1alpha2 kind string VLANNamespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VLANNamespaceSpec Spec is the desired state of the VLANNamespace status VLANNamespaceStatus Status is the observed state of the VLANNamespace"},{"location":"reference/api/#vlannamespacespec","title":"VLANNamespaceSpec","text":"

    VLANNamespaceSpec defines the desired state of VLANNamespace

    Appears in: - AgentSpec - VLANNamespace

Field Description ranges VLANRange array Ranges is a list of VLAN ranges to be used in this namespace; they must not overlap with each other or with Fabric reserved VLAN ranges"},{"location":"reference/api/#vlannamespacestatus","title":"VLANNamespaceStatus","text":"

    VLANNamespaceStatus defines the observed state of VLANNamespace

    Appears in: - VLANNamespace

    "},{"location":"reference/cli/","title":"Fabric CLI","text":"

    Under construction.

Currently, the Fabric CLI is represented by a kubectl plugin, kubectl-fabric, automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client which allows managing Fabric resources in a more convenient way. The Fabric CLI only provides a subset of the functionality available via the Fabric API and is focused on simplifying object creation and manipulation of existing objects, while the main get/list/update operations are expected to be done using kubectl.

    core@control-1 ~ $ kubectl fabric\nNAME:\n   hhfctl - Hedgehog Fabric user client\n\nUSAGE:\n   hhfctl [global options] command [command options] [arguments...]\n\nVERSION:\n   v0.23.0\n\nCOMMANDS:\n   vpc                VPC commands\n   switch, sw, agent  Switch/Agent commands\n   connection, conn   Connection commands\n   switchgroup, sg    SwitchGroup commands\n   external           External commands\n   help, h            Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n   --verbose, -v  verbose output (includes debug) (default: true)\n   --help, -h     show help\n   --version, -V  print the version\n
    "},{"location":"reference/cli/#vpc","title":"VPC","text":"

Create a VPC named vpc-1 with subnet 10.0.1.0/24 and VLAN 1001, with DHCP enabled and the DHCP range optionally starting from 10.0.1.10:

    core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n

Attach the previously created VPC to the server server-01 (which is connected to the Fabric using the server-01--mclag--leaf-01--leaf-02 Connection):

    core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n

To peer a VPC with another VPC (e.g. vpc-2), use the following command:

    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n
    "},{"location":"release-notes/","title":"Release notes","text":""},{"location":"release-notes/#alpha-3","title":"Alpha-3","text":""},{"location":"release-notes/#sonic-support","title":"SONiC support","text":"

    Broadcom Enterprise SONiC 4.2.0 (previously 4.1.1)

    "},{"location":"release-notes/#multiple-ipv4-namespaces","title":"Multiple IPv4 namespaces","text":""},{"location":"release-notes/#hedgehog-fabric-dhcp-and-ipam-service","title":"Hedgehog Fabric DHCP and IPAM Service","text":""},{"location":"release-notes/#hedgehog-fabric-ntp-service","title":"Hedgehog Fabric NTP Service","text":""},{"location":"release-notes/#staticexternal-connections","title":"StaticExternal connections","text":""},{"location":"release-notes/#dhcp-relay-to-3rd-party-dhcp-service","title":"DHCP Relay to 3rd party DHCP service","text":"

    Support for 3rd party DHCP server (DHCP Relay config) through the API

    "},{"location":"release-notes/#alpha-2","title":"Alpha-2","text":""},{"location":"release-notes/#controller","title":"Controller","text":"

    A single controller. No controller redundancy.

    "},{"location":"release-notes/#controller-connectivity","title":"Controller connectivity","text":"

    For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.

    Spine controller connectivity is not supported.

    For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.

Management port connectivity can also be supported for the CLOS/LEAF-SPINE topology, but it requires all switches to be connected to the controllers via management ports. No chain booting is possible for this configuration.

    "},{"location":"release-notes/#controller-requirements","title":"Controller requirements","text":""},{"location":"release-notes/#chain-booting","title":"Chain booting","text":"

    Switches not directly connecting to the controllers can chain boot via the data network.

    "},{"location":"release-notes/#topology-support","title":"Topology support","text":"

    CLOS/LEAF-SPINE and Collapsed Core topologies are supported.

    "},{"location":"release-notes/#leaf-roles-for-clos-topology","title":"LEAF Roles for CLOS topology","text":"

Server leaf, border leaf, and mixed leaf modes are supported.

    "},{"location":"release-notes/#collapsed-core-topology","title":"Collapsed Core Topology","text":"

    Two ToR/LEAF switches with MCLAG server connection.

    "},{"location":"release-notes/#server-multihoming","title":"Server multihoming","text":"

    MCLAG-only.

    "},{"location":"release-notes/#device-support","title":"Device support","text":""},{"location":"release-notes/#leafs","title":"LEAFs","text":""},{"location":"release-notes/#spines","title":"SPINEs","text":""},{"location":"release-notes/#underlay-configuration","title":"Underlay configuration:","text":"

Port speed, port group speed, and port breakouts are configurable through the API.

    "},{"location":"release-notes/#vpc-overlay-implementation","title":"VPC (overlay) Implementation","text":"

    VXLAN-based BGP eVPN.

    "},{"location":"release-notes/#multi-subnet-vpcs","title":"Multi-subnet VPCs","text":"

    A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.

    "},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"

    Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.

    "},{"location":"release-notes/#vlan-namespace","title":"VLAN Namespace","text":"

VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to a VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.

This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.

    "},{"location":"release-notes/#switch-groups","title":"Switch Groups","text":"

    Each switch belongs to a list of switch groups used for identifying redundancy groups for things like external connectivity.

    "},{"location":"release-notes/#mutual-vpc-peering","title":"Mutual VPC Peering","text":"

    VPC peering is supported and possible between a pair of VPCs that belong to the same IPv4 and VLAN namespaces.

    "},{"location":"release-notes/#external-vpc-peering","title":"External VPC Peering","text":"

    VPC peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of the border or mixed leaf switches. Multiple \u201cexternal systems\u201d with multiple devices/links in each of them are supported.

    The user controls what subnets/prefixes to import and export from/to the external system.

    No NAT function is supported for external peering.

    "},{"location":"release-notes/#host-connectivity","title":"Host connectivity","text":"

Servers can be attached as Unbundled, Bundled (LAG), and MCLAG.

    "},{"location":"release-notes/#dhcp-service","title":"DHCP Service","text":"

    VPC is provided with an optional DHCP service with simple IPAM

    "},{"location":"release-notes/#local-vpc-peering-loopbacks","title":"Local VPC peering loopbacks","text":"

    To enable local inter-vpc peering that allows routing of traffic between VPCs, local loopbacks are required to overcome silicon limitations.

    "},{"location":"release-notes/#scale","title":"Scale","text":""},{"location":"release-notes/#software-versions","title":"Software versions","text":""},{"location":"release-notes/#known-limitations","title":"Known Limitations","text":""},{"location":"release-notes/#alpha-1","title":"Alpha-1","text":""},{"location":"troubleshooting/overview/","title":"Troubleshooting","text":"

    Under construction.

    "},{"location":"user-guide/connections/","title":"Connections","text":"

The Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all connections between the devices in the Wiring Diagram.

    There are multiple types of connections.

    "},{"location":"user-guide/connections/#server-connections-user-facing","title":"Server connections (user-facing)","text":"

    Server connections are used to connect workload servers to the switches.

    "},{"location":"user-guide/connections/#unbundled","title":"Unbundled","text":"

Unbundled server connections are used to connect servers to a single switch using a single port.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-4--unbundled--s5248-02\n  namespace: default\nspec:\n  unbundled:\n    link: # Defines a single link between a server and a switch\n      server:\n        port: server-4/enp2s1\n      switch:\n        port: s5248-02/Ethernet3\n
    "},{"location":"user-guide/connections/#bundled","title":"Bundled","text":"

Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-3--bundled--s5248-01\n  namespace: default\nspec:\n  bundled:\n    links: # Defines multiple links between a single server and a single switch\n    - server:\n        port: server-3/enp2s1\n      switch:\n        port: s5248-01/Ethernet3\n    - server:\n        port: server-3/enp2s2\n      switch:\n        port: s5248-01/Ethernet4\n
    "},{"location":"user-guide/connections/#mclag","title":"MCLAG","text":"

MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing).

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # Defines multiple links between a single server and a pair of switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
    "},{"location":"user-guide/connections/#switch-connections-fabric-facing","title":"Switch connections (fabric-facing)","text":"

    Switch connections are used to connect switches to each other and provide any needed \"service\" connectivity to implement the Fabric features.

    "},{"location":"user-guide/connections/#fabric","title":"Fabric","text":"

Connections between a specific spine and leaf, covering all actual wires between a single pair.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5232-01--fabric--s5248-01\n  namespace: default\nspec:\n  fabric:\n    links: # Defines multiple links between a spine-leaf pair of switches with IP addresses\n    - leaf:\n        ip: 172.30.30.1/31\n        port: s5248-01/Ethernet48\n      spine:\n        ip: 172.30.30.0/31\n        port: s5232-01/Ethernet0\n    - leaf:\n        ip: 172.30.30.3/31\n        port: s5248-01/Ethernet56\n      spine:\n        ip: 172.30.30.2/31\n        port: s5232-01/Ethernet4\n
    "},{"location":"user-guide/connections/#mclag-domain","title":"MCLAG-Domain","text":"

Used to define a pair of MCLAG switches with Session and Peer links between them.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-01--mclag-domain--s5248-02\n  namespace: default\nspec:\n  mclagDomain:\n    peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link\n    - switch1:\n        port: s5248-01/Ethernet72\n      switch2:\n        port: s5248-02/Ethernet72\n    - switch1:\n        port: s5248-01/Ethernet73\n      switch2:\n        port: s5248-02/Ethernet73\n    sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link\n    - switch1:\n        port: s5248-01/Ethernet74\n      switch2:\n        port: s5248-02/Ethernet74\n    - switch1:\n        port: s5248-01/Ethernet75\n      switch2:\n        port: s5248-02/Ethernet75\n
    "},{"location":"user-guide/connections/#vpc-loopback","title":"VPC-Loopback","text":"

Required to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is needed due to hardware limitations of the currently supported switches.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-01--vpc-loopback\n  namespace: default\nspec:\n  vpcLoopback:\n    links: # Defines multiple loopbacks on a single switch\n    - switch1:\n        port: s5248-01/Ethernet16\n      switch2:\n        port: s5248-01/Ethernet17\n    - switch1:\n        port: s5248-01/Ethernet18\n      switch2:\n        port: s5248-01/Ethernet19\n
    "},{"location":"user-guide/connections/#management","title":"Management","text":"

    Connection to the Control Node.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: control-1--mgmt--s5248-01-front\n  namespace: default\nspec:\n  management:\n    link: # Defines a single link between a control node and a switch\n      server:\n        ip: 172.30.20.0/31\n        port: control-1/enp2s1\n      switch:\n        ip: 172.30.20.1/31\n        port: s5248-01/Ethernet0\n
    "},{"location":"user-guide/connections/#connecting-fabric-to-outside-world","title":"Connecting Fabric to outside world","text":"

Provides connectivity to the outside world, e.g. the internet, other networks, or other systems such as DHCP, NTP, LMA, and AAA services.

    "},{"location":"user-guide/connections/#staticexternal","title":"StaticExternal","text":"

A simple way to connect things like a DHCP server directly to the Fabric by connecting it to specific switch ports.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: third-party-dhcp-server--static-external--s5248-04\n  namespace: default\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: s5248-04/Ethernet1 # switch port to use\n        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port\n        vlan: 1005 # Optional VLAN ID to use for the switch port, if 0 - no VLAN is configured\n        subnets: # List of subnets that will be routed to the switch port using static routes and next hop\n          - 10.99.0.1/24\n          - 10.199.0.100/32\n        nextHop: 172.30.50.1 # Next hop IP address that will be used when configuring static routes for the \"subnets\" list\n
    "},{"location":"user-guide/connections/#external","title":"External","text":"

Connection to external systems, e.g. edge/provider routers, using BGP peering and configuring inbound/outbound communities, as well as granularly controlling what gets advertised and which routes are accepted.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-03--external--5835\n  namespace: default\nspec:\n  external:\n    link: # Defines a single link between a switch and an external system\n      switch:\n        port: s5248-03/Ethernet3\n
    "},{"location":"user-guide/devices/","title":"Switches and Servers","text":"

All devices in the Hedgehog Fabric are divided into two groups, switches and servers, represented by the corresponding Switch and Server objects in the API. They are needed to define all participants of the Fabric and their roles in the Wiring Diagram, as well as the Connections between them.

    "},{"location":"user-guide/devices/#switches","title":"Switches","text":"

Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API and consist of basic information like name, description, location, role, etc., as well as port group speeds, port breakouts, ASN, IP addresses, and so on.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  asn: 65101 # ASN of the switch\n  description: leaf-1\n  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node\n  location:\n    location: gen--default--s5248-01\n  locationSig:\n    sig: <undefined>\n    uuidSig: <undefined>\n  portBreakouts: # Configures port breakouts for the switch\n    1/55: 4x25G\n  portGroupSpeeds: # Configures port group speeds for the switch\n    \"1\": 10G\n    \"2\": 10G\n  protocolIP: 172.30.11.100/32 # Used as BGP router ID\n  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf\n  vlanNamespaces: # Defines which VLANs could be used to attach servers\n  - default\n  vtepIP: 172.30.12.100/32\n  groups: # Defines which groups the switch belongs to\n  - some-group\n

The SwitchGroup is currently just a marker and doesn't have any configuration options.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: SwitchGroup\nmetadata:\n  name: border\n  namespace: default\nspec: {}\n
    "},{"location":"user-guide/devices/#servers","title":"Servers","text":"

Servers include both control nodes and users' workload servers.

    Control Node:

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: control-1\n  namespace: default\nspec:\n  type: control # Type of the server, one of control or \"\" (empty) for regular workload server\n

    Regular workload server:

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: server-1\n  namespace: default\nspec:\n  description: MH s5248-01/E1 s5248-02/E1\n
    "},{"location":"user-guide/external/","title":"External Peering","text":"

Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric, providing L3 connectivity. The External Peering feature allows setting up an external peering endpoint and enforcing several policies between internal and external endpoints.

Hedgehog Fabric does not operate Edge-side devices.

    "},{"location":"user-guide/external/#overview","title":"Overview","text":"

Traffic exits the Fabric on Border Leafs that are connected to Edge devices. Border Leafs are suitable for terminating L2VPN connections, distinguishing VPC L3 routable traffic towards Edge devices, and landing VPC servers. Border Leafs (or Borders) can connect to several Edge devices.

External Peering is only available on switch devices that support subinterfaces.

    "},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"

In order to distinguish VPC traffic, the Edge device should be able to: - Set up BGP IPv4 to advertise and receive routes from the Fabric - Connect to a Fabric Border Leaf over VLAN - Mark egress routes towards the Fabric with BGP Communities - Filter ingress routes from the Fabric by BGP Communities

    All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.

    "},{"location":"user-guide/external/#control-plane","title":"Control Plane","text":"

The Fabric shares VPC routes with Edge devices via BGP. Peering is done over VLAN in the IPv4 Unicast AFI/SAFI.

    "},{"location":"user-guide/external/#data-plane","title":"Data Plane","text":"

VPC L3 routable traffic will be tagged with a VLAN and sent to the Edge device. Further processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.

    "},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"

Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that a VPC can export to and import from the Edge devices.

    "},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"

General configuration starts with the specification of External objects. Each object of External type can represent a set of Edge devices, a single BGP instance on an Edge device, or any other unified Edge entity that can be described with the following config.

Each External should be bound to a VPC IPv4 namespace; otherwise, prefix overlaps may happen.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: default--5835\nspec:\n  ipv4Namespace: # VPC IP Namespace\n  inboundCommunity: # BGP Standard Community of routes from Edge devices\n  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from Fabric\n
    "},{"location":"user-guide/external/#connection","title":"Connection","text":"

A Connection of type external is used to identify the switch port on a Border Leaf that is cabled to an Edge device.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: # specified or generated\nspec:\n  external:\n    link:\n      switch:\n        port: # SwitchName/EthernetXXX\n
    "},{"location":"user-guide/external/#external-attachment","title":"External Attachment","text":"

ExternalAttachment is a definition of BGP peering and traffic connectivity between a Border Leaf and an External. Attachments are bound to a Connection of type external and specify the VLAN that will be used to segregate a particular Edge peering.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: #\nspec:\n  connection: # Name of the Connection with type external\n  external: # Name of the External to pick config\n  neighbor:\n    asn: # Edge device ASN\n    ip: # IP address of Edge device to peer with\n  switch:\n    ip: # IP Address on the Border Leaf to set up BGP peering\n    vlan: # Vlan ID to tag control and data traffic\n

Several ExternalAttachments can be configured for the same Connection, but each for a different VLAN.

    "},{"location":"user-guide/external/#external-vpc-peering","title":"External VPC Peering","text":"

To allow a specific VPC to have access to Edge devices, the VPC should be bound to a specific External object. This is done via an ExternalPeering object.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: # Name of ExternalPeering\nspec:\n  permit:\n    external:\n      name: # External Name\n      prefixes: # List of prefixes(routes) to be allowed to pick up from External\n      - # IPv4 Prefix\n    vpc:\n      name: # VPC Name\n      subnets: # List of VPC subnets name to be allowed to have access to External (Edge)\n      - # Name of the subnet within VPC\n
Prefixes can be specified as an exact match or with the mask range indicators le and ge. le matches prefix lengths that are less than or equal to the given value, and ge matches prefix lengths that are greater than or equal to it.

Example: allow ANY IPv4 prefix that comes from the External, i.e. all prefixes that match the default route with any prefix length:

    spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - le: 32\n        prefix: 0.0.0.0/0\n
    ge and le can also be combined.

    Example:

    spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - le: 24\n        ge: 16\n        prefix: 77.0.0.0/8\n
For instance, 77.42.0.0/18 will be matched by the prefix rule above, but 77.128.77.128/25 or 77.0.0.0/12 won't, since their lengths fall outside the ge 16 to le 24 range.

    "},{"location":"user-guide/external/#examples","title":"Examples","text":"

This example shows peering with an External object named HedgeEdge, given a Fabric VPC named vpc-1, on the Border Leaf switchBorder that is cabled to an Edge device on port Ethernet42. vpc-1 is required to receive any prefixes advertised from the External.

    "},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"

    # hhfctl external create --name HedgeEdge --ipns default --in 65102:5000 --out 5000:65102\n
    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: HedgeEdge\n  namespace: default\nspec:\n  inboundCommunity: 65102:5000\n  ipv4Namespace: default\n  outboundCommunity: 5000:65102\n

    "},{"location":"user-guide/external/#connection_1","title":"Connection","text":"

The Connection should be specified in the wiring diagram.

    ###\n### switchBorder--external--HedgeEdge\n###\napiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: switchBorder--external--HedgeEdge\nspec:\n  external:\n    link:\n      switch:\n        port: switchBorder/Ethernet42\n
    "},{"location":"user-guide/external/#externalattachment","title":"ExternalAttachment","text":"

Specified in the wiring diagram:

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: switchBorder--HedgeEdge\nspec:\n  connection: switchBorder--external--HedgeEdge\n  external: HedgeEdge\n  neighbor:\n    asn: 65102\n    ip: 100.100.0.6\n  switch:\n    ip: 100.100.0.1/24\n    vlan: 100\n

    "},{"location":"user-guide/external/#externalpeering","title":"ExternalPeering","text":"
    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--HedgeEdge\nspec:\n  permit:\n    external:\n      name: HedgeEdge\n      prefixes:\n      - le: 32\n        prefix: 0.0.0.0/0\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n
    "},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"

NOTE: Hedgehog does not recommend using the following configuration in production. It's just an example of an Edge peer config.

    Interface config

    interface Ethernet2.100\n encapsulation dot1q vlan-id 100\n description switchBorder--Ethernet42\n no shutdown\n ip vrf forwarding VrfHedge\n ip address 100.100.0.6/24\n

    BGP Config

    !\nrouter bgp 65102 vrf VrfHedge\n log-neighbor-changes\n timers 60 180\n !\n address-family ipv4 unicast\n  maximum-paths 64\n  maximum-paths ibgp 1\n  import vrf VrfPublic\n !\n neighbor 100.100.0.1\n  remote-as 65103\n  !\n  address-family ipv4 unicast\n   activate\n   route-map HedgeIn in\n   route-map HedgeOut out\n   send-community both\n !\n
    Route Map configuration
    route-map HedgeIn permit 10\n match community Hedgehog\n!\nroute-map HedgeOut permit 10\n set community 65102:5000\n!\n\nbgp community-list standard HedgeIn permit 5000:65102\n

    "},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"

This is an example of how the Hedgehog Fabric can be used with Harvester or any hypervisor on servers connected to the Fabric. It assumes that you have already installed the Fabric and have some servers running Harvester attached to it.

You'll need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.

You can create multiple VPCs and attach them to the Connections to these servers to make them available to the VMs in Harvester or any other hypervisor.
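
For example, a Harvester node and its fabric connection could be described like this (a hedged sketch modeled on the Server and MCLAG Connection examples in this guide; names and ports are illustrative):

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: harvester-1\n  namespace: default\nspec:\n  description: Harvester node s5248-01/E5 s5248-02/E5\n---\napiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: harvester-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # NIC names match the bond members used in the VlanConfig below\n    - server:\n        port: harvester-1/enp5s0f0\n      switch:\n        port: s5248-01/Ethernet5\n    - server:\n        port: harvester-1/enp3s0f1\n      switch:\n        port: s5248-02/Ethernet5\n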

    "},{"location":"user-guide/harvester/#congigure-harvester","title":"Congigure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"

    From the \"Cluster Network/Confg\" side menu. Create a new Cluster Network.

    Here is what the CRD looks like cleaned up:

    apiVersion: network.harvesterhci.io/v1beta1\nkind: ClusterNetwork\nmetadata:\n  name: testnet\n
    "},{"location":"user-guide/harvester/#add-a-network-config","title":"Add a Network Config","text":"

    By clicking \"Create Network Confg\". Add your connections and select bonding type.

The resulting cleaned-up CRD:

    apiVersion: network.harvesterhci.io/v1beta1\nkind: VlanConfig\nmetadata:\n  name: testconfig\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\nspec:\n  clusterNetwork: testnet\n  uplink:\n    bondOptions:\n      miimon: 100\n      mode: 802.3ad\n    linkAttributes:\n      txQLen: -1\n    nics:\n      - enp5s0f0\n      - enp3s0f1\n
    "},{"location":"user-guide/harvester/#add-vlan-based-vm-networks","title":"Add VLAN based VM Networks","text":"

Browse over to \"VM Networks\" and add one for each VLAN you want to support, assigning them to the cluster network.

Here is what the CRDs will look like for both VLANs:

    apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1001'\n  name: testnet1001\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1001\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1001,\"ipam\":{}}\n
    apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  name: testnet1000\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1000'\n    #  key: string\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1000\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1000,\"ipam\":{}}\n
    "},{"location":"user-guide/harvester/#using-the-vpcs","title":"Using the VPCs","text":"

Now you can choose the created VM Networks when creating a VM in Harvester, making the VM part of the VPC.

    "},{"location":"user-guide/overview/","title":"Overview","text":"

This chapter is intended to give an overview of the main features of the Hedgehog Fabric and their usage.

    "},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"

A Virtual Private Cloud; similar to a public cloud VPC, it provides an isolated private network for resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default # Limits to which subnets could be used by VPC to guarantee non-overlapping IPv4 ranges\n  vlanNamespace: default # Limits to which switches VPC could be attached to guarantee non-overlapping VLANs\n  subnets:\n    default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n      dhcp:\n        enable: true # On-demand DHCP server\n        range: # Optionally, start/end range could be specified\n          start: 10.10.1.10\n      subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n      vlan: \"1001\" # User-defined VLAN from vlan namespace\n    third-party-dhcp: # Another subnet\n      dhcp:\n        relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using StaticExternal connection\n      subnet: \"10.10.2.0/24\"\n      vlan: \"1002\"\n    another-subnet: # Minimal configuration is just a name, subnet and VLAN\n      subnet: 10.10.100.0/24\n      vlan: \"1100\"\n

If you're using a third-party DHCP server by configuring spec.subnets.<subnet>.dhcp.relay, additional information will be added to the DHCP packet forwarded to the DHCP server to make it possible to identify the VPC and subnet. The information is added under the RelayAgentInfo option (82) in the DHCP packet. The relay sets two suboptions in the packet

    "},{"location":"user-guide/vpcs/#vpcattachment","title":"VPCAttachment","text":"

    Represents the assignment of a specific VPC subnet to a Connection object, i.e., a binding between exact server port(s) and a VPC. It results in the VPC being available on the specified server port(s) on the subnet VLAN.

    A VPC can be attached only to switches that are part of the VLAN namespace used by that VPC.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n  subnet: vpc-1/default # VPC subnet name\n
    "},{"location":"user-guide/vpcs/#vpcpeering","title":"VPCPeering","text":"

    It enables VPC-to-VPC connectivity. There are two types of VPC peering, local and remote (implemented on a separate switch group); examples of both are shown below.

    VPC peering is only possible between VPCs attached to the same IPv4 namespace.

    Local:

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-3\n  namespace: default\nspec:\n  permit: # Defines a pair of VPCs to peer\n  - vpc-1: {} # meaning all subnets of two VPCs will be able to communicate to each other\n    vpc-3: {} # more advanced filtering will be supported in future releases\n

    Remote:

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n  remote: border # indicates a switch group to implement the peering on\n
    "},{"location":"user-guide/vpcs/#ipv4namespace","title":"IPv4Namespace","text":"

    Defines non-overlapping IPv4 ranges for VPC subnets. Each VPC belongs to a specific IPv4 namespace.

    apiVersion: vpc.githedgehog.com/v1alpha2\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # List of the subnets that VPCs can pick their subnets from\n  - 10.10.0.0/16\n
    "},{"location":"user-guide/vpcs/#vlannamespace","title":"VLANNamespace","text":"

    Defines non-overlapping VLAN ranges for attaching servers. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges.

    apiVersion: wiring.githedgehog.com/v1alpha2\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n  - from: 1000\n    to: 2999\n
    "},{"location":"vlab/demo/","title":"Demo on VLAB","text":"

    The goal of this demo is to show how to use VPCs: how to create and attach them, peer them, and test connectivity between the servers. Examples are based on the default VLAB topology.

    You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.

    "},{"location":"vlab/demo/#default-topology","title":"Default topology","text":"

    The default topology is Spine-Leaf with 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf. Optionally, you can run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which consists of just 2 switches.

    For more details on customizing topologies, see the Running VLAB section.

    In the default topology, the following Control Node and Switch VMs are created:

    graph TD\n    CN[Control Node]\n\n    S1[Spine 1]\n    S2[Spine 2]\n\n    L1[MCLAG Leaf 1]\n    L2[MCLAG Leaf 2]\n    L3[Leaf 3]\n\n    CN --> L1\n    CN --> L2\n\n    S1 --> L1\n    S1 --> L2\n    S2 --> L2\n    S2 --> L3

    As well as test servers:

    graph TD\n    L1[MCLAG Leaf 1]\n    L2[MCLAG Leaf 2]\n    L3[Leaf 3]\n\n    TS1[Test Server 1]\n    TS2[Test Server 2]\n    TS3[Test Server 3]\n    TS4[Test Server 4]\n    TS5[Test Server 5]\n    TS6[Test Server 6]\n\n    TS1 --> L1\n    TS1 --> L2\n\n    TS2 --> L1\n    TS2 --> L2\n\n    TS3 --> L1\n    TS4 --> L2\n\n    TS5 --> L3\n    TS6 --> L3
    "},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"

    You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the control node, or from outside the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs, each with a single subnet and DHCP server enabled (with an optional IP address range start), and attach them to some test servers:

    core@control-1 ~ $ kubectl get conn | grep server\nserver-01--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-02--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-03--unbundled--leaf-01        unbundled      5h13m\nserver-04--bundled--leaf-02          bundled        5h13m\nserver-05--unbundled--leaf-03        unbundled      5h13m\nserver-06--bundled--leaf-03          bundled        5h13m\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n06:48:46 INF VPC created name=vpc-1\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10\n06:49:04 INF VPC created name=vpc-2\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02\n06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02\n

    A VPC subnet should belong to an IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   5h14m\n

    After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration was applied to the switches:

    core@control-1 ~ $ kubectl get agents\nNAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION\nleaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0\nleaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0\nleaf-03    server-leaf   VS-03           112s      5          5          v0.23.0\nspine-01   spine         VS-04           16m       3          3          v0.23.0\nspine-02   spine         VS-05           18m       4          4          v0.23.0\n

    As you can see, the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration generation has been applied.
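    To follow the process until the generations converge, a minimal sketch using the standard kubectl watch flag:

    core@control-1 ~ $ kubectl get agents -w\n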

    "},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"

    You can use hhfab vlab ssh on the host to ssh into the test servers and configure networking there. For example, for server-01 (MCLAG-attached to both leaf-01 and leaf-02) we need to configure a bond with a VLAN on top of it, while for server-05 (single-homed, unbundled, attached to leaf-03) we only need to configure a VLAN; both will then get an IP address from the DHCP server. You can use the ip command to configure networking on the servers, or use the little hhnet helper preinstalled by Fabricator on the test servers.

    For server-01:

    core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n       valid_lft 86396sec preferred_lft 86396sec\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n

    And for server-02:

    core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n       valid_lft 86185sec preferred_lft 86185sec\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n
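    For a single-homed server like server-05, only a VLAN interface on the single NIC is needed. A minimal sketch using plain ip commands (standard Linux tooling, no hhnet); the NIC name enp2s1 and VLAN ID 1005 are assumptions for illustration, based on the naming pattern of the other servers and on a VPC subnet with that VLAN being attached to server-05's connection:

    core@server-05 ~ $ sudo ip link add link enp2s1 name enp2s1.1005 type vlan id 1005\ncore@server-05 ~ $ sudo ip link set enp2s1 up\ncore@server-05 ~ $ sudo ip link set enp2s1.1005 up\n# then run a DHCP client of your choice on enp2s1.1005 to get an address\n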
    "},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"

    You can test connectivity between the servers before creating the VPC peering using the ping command:

    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
    core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
    "},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"

    To enable connectivity between the VPCs, you need to peer them using the kubectl fabric vpc peer command:

    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n07:04:58 INF VPCPeering created name=vpc-1--vpc-2\n

    Make sure to wait until the peering is applied to the switches using the kubectl get agents command. After that, you can test connectivity between the servers again:

    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
    core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n

    If you delete the VPC peering using the following command and wait for the agents to apply the configuration on the switches, you will see that connectivity is lost again:

    core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2\nvpcpeering.vpc.githedgehog.com \"vpc-1--vpc-2\" deleted\n
    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n

    You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior caused by limitations of the VLAB environment.

    core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
    "},{"location":"vlab/demo/#using-vpcs-with-overlapping-subnets","title":"Using VPCs with overlapping subnets","text":"

    First of all, we need to make sure that we have a second IPv4Namespace with the same subnet as the default one:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   24m\n\ncore@control-1 ~ $ cat <<EOF > ipns-2.yaml\napiVersion: vpc.githedgehog.com/v1alpha2\nkind: IPv4Namespace\nmetadata:\n  name: ipns-2\n  namespace: default\nspec:\n  subnets:\n  - 10.0.0.0/16\nEOF\n\ncore@control-1 ~ $ kubectl apply -f ipns-2.yaml\nipv4namespace.vpc.githedgehog.com/ipns-2 created\n\ncore@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   30m\nipns-2    [\"10.0.0.0/16\"]   8s\n

    Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now we can create vpc-3 with the same subnet as vpc-1 (but in the different IPv4Namespace) and attach it to server-03:

    core@control-1 ~ $ cat <<EOF > vpc-3.yaml\napiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPC\nmetadata:\n  name: vpc-3\n  namespace: default\nspec:\n  ipv4Namespace: ipns-2\n  subnets:\n    default:\n      dhcp:\n        enable: true\n        range:\n          start: 10.0.1.10\n      subnet: 10.0.1.0/24\n      vlan: \"2001\"\n  vlanNamespace: default\nEOF\n\ncore@control-1 ~ $ kubectl apply -f vpc-3.yaml\n
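    To complete the attachment to server-03 mentioned above, you also need a VPCAttachment; a minimal sketch, assuming the server-03--unbundled--leaf-01 connection name from the earlier listing and the naming convention used for the other attachments:

    core@control-1 ~ $ cat <<EOF > vpc-3-attach.yaml\napiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCAttachment\nmetadata:\n  name: vpc-3--default--server-03--unbundled--leaf-01\n  namespace: default\nspec:\n  connection: server-03--unbundled--leaf-01 # Connection name representing the server port(s)\n  subnet: vpc-3/default # VPC subnet name\nEOF\n\ncore@control-1 ~ $ kubectl apply -f vpc-3-attach.yaml\n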

    At that point you can set up networking on server-03 the same way as for server-01 and server-02 in the previous sections, and you will see that server-01 and server-03 now have IP addresses from the same subnet.

    "},{"location":"vlab/overview/","title":"Overview","text":"

    It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out the Fabric and learn about its look and feel, API and capabilities. It's not suitable for any data plane or performance testing, nor for production use.

    In the VLAB all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery, boot and installation process just like on real hardware.

    "},{"location":"vlab/overview/#overview_1","title":"Overview","text":"

    The hhfab CLI provides a special command, vlab, to manage the virtual labs. It allows you to run a set of virtual machines that simulate the Fabric infrastructure, including the control node, switches and test servers, and it automatically runs the installer to get the Fabric up and running.

    You can find more information about getting hhfab in the download section.

    "},{"location":"vlab/overview/#system-requirements","title":"System Requirements","text":"

    Currently, it's only tested on Ubuntu 22.04 LTS, but it should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

    The following packages need to be installed: qemu-kvm, swtpm-tools, tpm2-tools and socat; docker is also required, to log in to the OCI registry.

    By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf. Optionally, you can run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which consists of just 2 switches.

    You can calculate the system requirements based on the allocated resources to the VMs using the following table:

    Device vCPU RAM Disk\nControl Node 6 6GB 100GB\nTest Server 2 768MB 10GB\nSwitch 4 5GB 50GB\n

    Which gives approximately the following requirements for the default spine-leaf topology:
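    A rough breakdown (1 control node, 6 test servers, 5 switches), derived from the per-VM numbers above; it matches the totals reported by hhfab vlab up later in this guide, and the collapsed-core topology needs less (2 switch VMs instead of 5):

    1 x Control Node:  6 vCPU,    6GB RAM, 100GB disk\n6 x Test Server:  12 vCPU, ~4.5GB RAM,  60GB disk\n5 x Switch:       20 vCPU,   25GB RAM, 250GB disk\nTotal:            38 vCPU, ~35.5GB RAM, 410GB disk\n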

    Usually, none of the VMs will reach 100% utilization of the allocated resources, but as a rule of thumb you should make sure that you have at least the allocated RAM and disk space available for all VMs.

    NVMe SSD for VM disks is highly recommended.

    "},{"location":"vlab/overview/#installing-prerequisites","title":"Installing prerequisites","text":"

    On Ubuntu 22.04 LTS you can install all required packages using the following commands:

    curl -fsSL https://get.docker.com -o install-docker.sh\nsudo sh install-docker.sh\nsudo usermod -aG docker $USER\nnewgrp docker\n
    sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat\nsudo usermod -aG kvm $USER\nnewgrp kvm\nkvm-ok\n

    Successful output of the kvm-ok command looks like this:

    ubuntu@docs:~$ kvm-ok\nINFO: /dev/kvm exists\nKVM acceleration can be used\n
    "},{"location":"vlab/overview/#next-steps","title":"Next steps","text":""},{"location":"vlab/running/","title":"Running VLAB","text":"

    Please make sure to follow the prerequisites and check the system requirements in the VLAB Overview section before running VLAB.

    "},{"location":"vlab/running/#initialize-vlab","title":"Initialize VLAB","text":"

    As a first step, you need to initialize Fabricator for the VLAB by running hhfab init --preset vlab (or -p vlab). It supports a lot of customization options, which you can find by adding --help to the command. If you want to tune the topology used for the VLAB, you can use the --fabric-mode (or -m) flag to choose between the spine-leaf (default) and collapsed-core topologies, and you can configure the number of spines, leafs, connections, etc. For example, the --spines-count and --mclag-leafs-count flags set the number of spines and MCLAG leafs respectively.

    So, by default you'll get 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC Loopback workaround.

    ubuntu@docs:~$ hhfab init -p vlab\n01:17:44 INF Generating wiring from gen flags\n01:17:44 INF Building wiring diagram fabricMode=spine-leaf chainControlLink=false controlLinksCount=0\n01:17:44 INF                     >>> spinesCount=2 fabricLinksCount=2\n01:17:44 INF                     >>> mclagLeafsCount=2 orphanLeafsCount=1\n01:17:44 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:17:44 INF                     >>> vpcLoopbacks=2\n01:17:44 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:17:44 INF Initialized preset=vlab fabricMode=spine-leaf config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

    Or if you want to run Collapsed Core topology with 2 MCLAG switches:

    ubuntu@docs:~$ hhfab init -p vlab -m collapsed-core\n01:20:07 INF Generating wiring from gen flags\n01:20:07 INF Building wiring diagram fabricMode=collapsed-core chainControlLink=false controlLinksCount=0\n01:20:07 INF                     >>> mclagLeafsCount=2 orphanLeafsCount=0\n01:20:07 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:20:07 INF                     >>> vpcLoopbacks=2\n01:20:07 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:20:07 INF Initialized preset=vlab fabricMode=collapsed-core config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

    Or you can run custom topology with 2 spines, 4 MCLAG leafs and 2 non-MCLAG leafs using flags:

    ubuntu@docs:~$ hhfab init -p vlab --mclag-leafs-count 4 --orphan-leafs-count 2\n01:21:53 INF Generating wiring from gen flags\n01:21:53 INF Building wiring diagram fabricMode=spine-leaf chainControlLink=false controlLinksCount=0\n01:21:53 INF                     >>> spinesCount=2 fabricLinksCount=2\n01:21:53 INF                     >>> mclagLeafsCount=4 orphanLeafsCount=2\n01:21:53 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:21:53 INF                     >>> vpcLoopbacks=2\n01:21:53 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:21:53 INF Initialized preset=vlab fabricMode=spine-leaf config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

    Additionally, you can do extra Fabric configuration using flags on the init command or by passing a config file; more information is available in the Fabric Configuration section.

    Once you have initialized the VLAB, you need to download all artifacts and build the installer using the hhfab build command. It will automatically download all required artifacts from the OCI registry and build the installer as well as all other prerequisites for running the VLAB.

    "},{"location":"vlab/running/#build-the-installer-and-vlab","title":"Build the installer and VLAB","text":"
    ubuntu@docs:~$ hhfab build\n01:23:33 INF Building component=base\n01:23:33 WRN Attention! Development mode enabled - this is not secure! Default users and keys will be created.\n...\n01:23:33 INF Building component=control-os\n01:23:33 INF Building component=k3s\n01:23:33 INF Downloading name=m.l.hhdev.io:31000/githedgehog/k3s:v1.27.4-k3s1 to=.hhfab/control-install\nCopying k3s-airgap-images-amd64.tar.gz  187.36 MiB / 187.36 MiB   \u2819   0.00 b/s done\nCopying k3s                               56.50 MiB / 56.50 MiB   \u2819   0.00 b/s done\n01:23:35 INF Building component=zot\n01:23:35 INF Downloading name=m.l.hhdev.io:31000/githedgehog/zot:v1.4.3 to=.hhfab/control-install\nCopying zot-airgap-images-amd64.tar.gz  19.24 MiB / 19.24 MiB   \u2838   0.00 b/s done\n01:23:35 INF Building component=misc\n01:23:35 INF Downloading name=m.l.hhdev.io:31000/githedgehog/fabricator/k9s:v0.27.4 to=.hhfab/control-install\nCopying k9s  57.75 MiB / 57.75 MiB   \u283c   0.00 b/s done\n...\n01:25:40 INF Planned bundle=control-install name=fabric-api-chart op=\"push fabric/charts/fabric-api:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-image op=\"push fabric/fabric:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-chart op=\"push fabric/charts/fabric:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-agent-seeder op=\"push fabric/agent/x86_64:latest\"\n01:25:40 INF Planned bundle=control-install name=fabric-agent op=\"push fabric/agent:v0.23.0\"\n...\n01:25:40 INF Recipe created bundle=control-install actions=67\n01:25:40 INF Creating recipe bundle=server-install\n01:25:40 INF Planned bundle=server-install name=toolbox op=\"file /opt/hedgehog/toolbox.tar\"\n01:25:40 INF Planned bundle=server-install name=toolbox-load op=\"exec ctr\"\n01:25:40 INF Planned bundle=server-install name=hhnet op=\"file /opt/bin/hhnet\"\n01:25:40 INF Recipe created bundle=server-install actions=3\n01:25:40 INF Building done took=2m6.813384532s\n01:25:40 INF Packing bundle=control-install target=control-install.tgz\n01:25:45 INF Packing bundle=server-install target=server-install.tgz\n01:25:45 INF Packing done took=5.67007384s\n

    As soon as it's done, you can run the VLAB using the hhfab vlab up command. It will automatically start all VMs and run the installers on the control node and test servers. It will take some time for all VMs to come up and for the installers to finish; you will see the progress in the output. If you stop the command, it'll stop all VMs, and you can re-run it to get the VMs back up and running.

    "},{"location":"vlab/running/#run-vms-and-installers","title":"Run VMs and installers","text":"
    ubuntu@docs:~$ hhfab vlab up\n01:29:13 INF Starting VLAB server... basedir=.hhfab/vlab-vms vm-size=\"\" dry-run=false\n01:29:13 INF VM id=0 name=control-1 type=control\n01:29:13 INF VM id=1 name=server-01 type=server\n01:29:13 INF VM id=2 name=server-02 type=server\n01:29:13 INF VM id=3 name=server-03 type=server\n01:29:13 INF VM id=4 name=server-04 type=server\n01:29:13 INF VM id=5 name=server-05 type=server\n01:29:13 INF VM id=6 name=server-06 type=server\n01:29:13 INF VM id=7 name=leaf-01 type=switch-vs\n01:29:13 INF VM id=8 name=leaf-02 type=switch-vs\n01:29:13 INF VM id=9 name=leaf-03 type=switch-vs\n01:29:13 INF VM id=10 name=spine-01 type=switch-vs\n01:29:13 INF VM id=11 name=spine-02 type=switch-vs\n01:29:13 INF Total VM resources cpu=\"38 vCPUs\" ram=\"36352 MB\" disk=\"410 GB\"\n...\n01:29:13 INF Preparing VM id=0 name=control-1 type=control\n01:29:13 INF Copying files  from=.hhfab/control-os/ignition.json to=.hhfab/vlab-vms/control-1/ignition.json\n01:29:13 INF Copying files  from=.hhfab/vlab-files/flatcar.img to=.hhfab/vlab-vms/control-1/os.img\n 947.56 MiB / 947.56 MiB [==========================================================] 5.13 GiB/s done\n01:29:14 INF Copying files  from=.hhfab/vlab-files/flatcar_efi_code.fd to=.hhfab/vlab-vms/control-1/efi_code.fd\n01:29:14 INF Copying files  from=.hhfab/vlab-files/flatcar_efi_vars.fd to=.hhfab/vlab-vms/control-1/efi_vars.fd\n01:29:14 INF Resizing VM image (may require sudo password) name=control-1\n01:29:17 INF Initializing TPM name=control-1\n...\n01:29:46 INF Installing VM name=control-1 type=control\n01:29:46 INF Installing VM name=server-01 type=server\n01:29:46 INF Installing VM name=server-02 type=server\n01:29:46 INF Installing VM name=server-03 type=server\n01:29:47 INF Installing VM name=server-04 type=server\n01:29:47 INF Installing VM name=server-05 type=server\n01:29:47 INF Installing VM name=server-06 type=server\n01:29:49 INF Running VM id=0 name=control-1 type=control\n01:29:49 INF Running VM id=1 name=server-01 type=server\n01:29:49 INF Running VM id=2 name=server-02 type=server\n01:29:49 INF Running VM id=3 name=server-03 type=server\n01:29:50 INF Running VM id=4 name=server-04 type=server\n01:29:50 INF Running VM id=5 name=server-05 type=server\n01:29:50 INF Running VM id=6 name=server-06 type=server\n01:29:50 INF Running VM id=7 name=leaf-01 type=switch-vs\n01:29:50 INF Running VM id=8 name=leaf-02 type=switch-vs\n01:29:51 INF Running VM id=9 name=leaf-03 type=switch-vs\n01:29:51 INF Running VM id=10 name=spine-01 type=switch-vs\n01:29:51 INF Running VM id=11 name=spine-02 type=switch-vs\n...\n01:30:41 INF VM installed name=server-06 type=server installer=server-install\n01:30:41 INF VM installed name=server-01 type=server installer=server-install\n01:30:41 INF VM installed name=server-02 type=server installer=server-install\n01:30:41 INF VM installed name=server-04 type=server installer=server-install\n01:30:41 INF VM installed name=server-03 type=server installer=server-install\n01:30:41 INF VM installed name=server-05 type=server installer=server-install\n...\n01:31:04 INF Running installer on VM name=control-1 type=control installer=control-install\n...\n01:35:15 INF Done took=3m39.586394608s\n01:35:15 INF VM installed name=control-1 type=control installer=control-install\n

    After you see VM installed name=control-1, the installer has finished and you can get into the control node and other VMs to watch the Fabric coming up and the switches getting provisioned.

    "},{"location":"vlab/running/#default-credentials","title":"Default credentials","text":"

    Fabricator will create default users and keys for you to log in to the control node and test servers, as well as for the SONiC Virtual Switches.

    The default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access only to the switch CLI is op with password HHFab.Op!.

    "},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"

    The hhfab vlab command provides ssh and serial subcommands to access the VMs. You can use ssh to get into the control node and test servers after the VMs have started. You can use serial to get into the switch VMs while they are provisioning and installing the software. After the switches are installed, you can use ssh to get into them as well.

    You can select the device you want to access interactively, or pass its name using the --vm flag.

    ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190  and / toggles search\nSSH to VM:\n  \ud83e\udd94 control-1\n  server-01\n  server-02\n  server-03\n  server-04\n  server-05\n  server-06\n  leaf-01\n  leaf-02\n  leaf-03\n  spine-01\n  spine-02\n\n----------- VM Details ------------\nID:             0\nName:           control-1\nReady:          true\nBasedir:        .hhfab/vlab-vms/control-1\n
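    Or non-interactively, passing the VM name directly with the flag described above:

    ubuntu@docs:~$ hhfab vlab ssh --vm server-01\n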

    On the control node you'll have access to kubectl, the Fabric CLI and k9s to manage the Fabric. You can find information about the switch provisioning by running kubectl get agents -o wide. It usually takes about 10-15 minutes for the switches to get installed.

    After the switches are provisioned, you will see something like this:

    core@control-1 ~ $ kubectl get agents -o wide\nNAME       ROLE          DESCR           HWSKU                      ASIC   HEARTBEAT   APPLIED   APPLIEDG   CURRENTG   VERSION   SOFTWARE                ATTEMPT   ATTEMPTG   AGE\nleaf-01    server-leaf   VS-01 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     30s         5m5s      4          4          v0.23.0   4.1.1-Enterprise_Base   5m5s      4          10m\nleaf-02    server-leaf   VS-02 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     27s         3m30s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m30s     3          10m\nleaf-03    server-leaf   VS-03           DellEMC-S5248f-P-25G-DPB   vs     18s         3m52s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m52s     4          10m\nspine-01   spine         VS-04           DellEMC-S5248f-P-25G-DPB   vs     26s         3m59s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m59s     3          10m\nspine-02   spine         VS-05           DellEMC-S5248f-P-25G-DPB   vs     19s         3m53s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m53s     4          10m\n

    The Heartbeat column shows how long ago the switch sent its last heartbeat to the control node. The Applied column shows how long ago the switch applied the configuration. AppliedG shows the generation of the applied configuration, and CurrentG shows the generation of the configuration the switch is supposed to run. If AppliedG and CurrentG differ, the switch is still in the process of applying the configuration.

    At that point the Fabric is ready, and you can use kubectl and kubectl fabric to manage it. You can find more about it in the Running Demo and User Guide sections.

    "},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"

    You can get the main Fabric objects using kubectl get command on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.

    For example, to get the list of switches you can run:

    core@control-1 ~ $ kubectl get switch\nNAME       ROLE          DESCR           GROUPS   LOCATIONUUID                           AGE\nleaf-01    server-leaf   VS-01 MCLAG 1            5e2ae08a-8ba9-599a-ae0f-58c17cbbac67   6h10m\nleaf-02    server-leaf   VS-02 MCLAG 1            5a310b84-153e-5e1c-ae99-dff9bf1bfc91   6h10m\nleaf-03    server-leaf   VS-03                    5f5f4ad5-c300-5ae3-9e47-f7898a087969   6h10m\nspine-01   spine         VS-04                    3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da   6h10m\nspine-02   spine         VS-05                    96fbd4eb-53b5-5a4c-8d6a-bbc27d883030   6h10m\n

    Similarly, for the servers:

    core@control-1 ~ $ kubectl get server\nNAME        TYPE      DESCR                        AGE\ncontrol-1   control   Control node                 6h10m\nserver-01             S-01 MCLAG leaf-01 leaf-02   6h10m\nserver-02             S-02 MCLAG leaf-01 leaf-02   6h10m\nserver-03             S-03 Unbundled leaf-01       6h10m\nserver-04             S-04 Bundled leaf-02         6h10m\nserver-05             S-05 Unbundled leaf-03       6h10m\nserver-06             S-06 Bundled leaf-03         6h10m\n

    For connections:

    core@control-1 ~ $ kubectl get connection\nNAME                                 TYPE           AGE\ncontrol-1--mgmt--leaf-01             management     6h11m\ncontrol-1--mgmt--leaf-02             management     6h11m\ncontrol-1--mgmt--leaf-03             management     6h11m\ncontrol-1--mgmt--spine-01            management     6h11m\ncontrol-1--mgmt--spine-02            management     6h11m\nleaf-01--mclag-domain--leaf-02       mclag-domain   6h11m\nleaf-01--vpc-loopback                vpc-loopback   6h11m\nleaf-02--vpc-loopback                vpc-loopback   6h11m\nleaf-03--vpc-loopback                vpc-loopback   6h11m\nserver-01--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-02--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-03--unbundled--leaf-01        unbundled      6h11m\nserver-04--bundled--leaf-02          bundled        6h11m\nserver-05--unbundled--leaf-03        unbundled      6h11m\nserver-06--bundled--leaf-03          bundled        6h11m\nspine-01--fabric--leaf-01            fabric         6h11m\nspine-01--fabric--leaf-02            fabric         6h11m\nspine-01--fabric--leaf-03            fabric         6h11m\nspine-02--fabric--leaf-01            fabric         6h11m\nspine-02--fabric--leaf-02            fabric         6h11m\nspine-02--fabric--leaf-03            fabric         6h11m\n

    For IPv4 and VLAN namespaces:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   6h12m\n\ncore@control-1 ~ $ kubectl get vlanns\nNAME      AGE\ndefault   6h12m\n
    "},{"location":"vlab/running/#reset-vlab","title":"Reset VLAB","text":"

    To reset VLAB and start over, just remove the .hhfab directory and run hhfab init again.
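    A minimal sketch of the full reset sequence (assuming the vlab preset and the build/up flow described above):

    rm -rf .hhfab\nhhfab init -p vlab\nhhfab build\nhhfab vlab up\n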

    "},{"location":"vlab/running/#next-steps","title":"Next steps","text":""}]} \ No newline at end of file diff --git a/dev/sitemap.xml.gz b/dev/sitemap.xml.gz index 23376d569c1856a6097b315526e1b1e9844636ea..0b261ad5ae829a610c21dc97d970795cbefa7595 100644 GIT binary patch delta 16 XcmdnVypx$-zMF%?efg%1?3)+?D?|lF delta 16 YcmdnVypx$-zMF&N!Oo2v**7r)0594F?*IS*