diff --git a/docs/concepts/overview.md b/docs/concepts/overview.md
index b903776d..44e43209 100644
--- a/docs/concepts/overview.md
+++ b/docs/concepts/overview.md
@@ -33,7 +33,7 @@ Wiring Diagram consists of the following resources:
* __Server__: *any* physical server attached to the Fabric including Control Nodes
* __Connection__: *any* logical connection for devices
* usually it's a connection between two or more ports on two different devices
- * for example: MCLAG Peer Link, Unbundled/MCLAG server connections, Fabric connection between spine and leaf
+ * for example: Fabric connection between spine and leaf, and server connections like Unbundled, Bundled, MCLAG, or ESLAG.
* __VLANNamespace__ -> non-overlapping VLAN ranges for attaching servers
* __IPv4Namespace__ -> non-overlapping IPv4 ranges for VPC subnets
diff --git a/docs/install-upgrade/build-wiring.md b/docs/install-upgrade/build-wiring.md
index 631af359..16b7af96 100644
--- a/docs/install-upgrade/build-wiring.md
+++ b/docs/install-upgrade/build-wiring.md
@@ -42,7 +42,7 @@ spec:
1. See the [list](../reference/profiles.md) of profile names
2. More information in the [User Guide](../user-guide/profiles.md#port-naming)
-3. Could be MCLAG, ESLAG or nothing, more details in [Redundancy
+3. Can be ESLAG or omitted; more details in [Redundancy
Groups](../user-guide/devices.md#redundancy-groups)
## Design Discussion
@@ -79,12 +79,11 @@ A connection represents the physical wires in your data center. They connect swi
#### Server Connections
-A server connection is a connection used to connect servers to the fabric. The fabric will configure the server-facing port according to the type of the connection (MLAG, Bundle, etc). The configuration of the actual server needs to be done by the server administrator. The server port names are not validated by the fabric and used as metadata to identify the connection. A server connection can be one of:
+A server connection is used to connect servers to the fabric. The fabric configures the server-facing port according to the type of the connection (Unbundled, Bundled, ESLAG, etc.). The configuration of the actual server needs to be done by the server administrator. The server port names are not validated by the fabric and are only used as metadata to identify the connection. A server connection can be one of:
- *Unbundled* - A single cable connecting switch to server.
- *Bundled* - Two or more cables going to a single switch, a LAG or similar.
-- *MCLAG* - Two cables going to two different switches, also called dual homing. The switches will need a fabric link between them.
-- *ESLAG* - Two to four cables going to different switches, also called multi-homing. If four links are used there will need to be four switches connected to a single server with four NIC ports.
+- *ESLAG* - Two to four cables going to different switches, also called multi-homing (EVPN-MH). If four links are used, the server needs four NIC ports, each connected to a different switch.
``` mermaid
graph TD
@@ -95,45 +94,33 @@ graph TD
L3([Leaf 3])
L4([Leaf 4])
L5([Leaf 5])
- L6([Leaf 6])
- L7([Leaf 7])
TS1[Server1]
TS2[Server2]
TS3[Server3]
- TS4[Server4]
- S1 & S2 ---- L1 & L2 & L3 & L4 & L5 & L6 & L7
+ S1 & S2 ---- L1 & L2 & L3 & L4 & L5
L1 <-- Bundled --> TS1
L1 <-- Bundled --> TS1
L1 <-- Unbundled --> TS2
- L2 <-- MCLAG --> TS3
- L3 <-- MCLAG --> TS3
- L4 <-- ESLAG --> TS4
- L5 <-- ESLAG --> TS4
- L6 <-- ESLAG --> TS4
- L7 <-- ESLAG --> TS4
+ L2 <-- ESLAG --> TS3
+ L3 <-- ESLAG --> TS3
+ L4 <-- ESLAG --> TS3
+ L5 <-- ESLAG --> TS3
subgraph VPC 1
TS1
TS2
TS3
- TS4
- end
-
- subgraph MCLAG
- L2
- L3
end
subgraph ESLAG
+ L2
L3
L4
L5
- L6
- L7
end
-
+
```
#### Fabric Connections
diff --git a/docs/reference/profiles.md b/docs/reference/profiles.md
index b0f2ae5d..6dac9712 100644
--- a/docs/reference/profiles.md
+++ b/docs/reference/profiles.md
@@ -28,6 +28,9 @@ features and port naming scheme.
## Switch Feature Matrix
+!!! warning "MCLAG Deprecation"
+    MCLAG is being deprecated in favor of ESLAG (EVPN Multi-Homing). While MCLAG is still supported, ESLAG is recommended for new deployments.
+
The following table shows which features are supported by each switch profile:
| Switch Profile | Subinterfaces | ACLs | L2VNI | L3VNI | RoCE | MCLAG | ESLAG | QPN |
diff --git a/docs/troubleshooting/overview.md b/docs/troubleshooting/overview.md
index 9f35816b..4153b99c 100644
--- a/docs/troubleshooting/overview.md
+++ b/docs/troubleshooting/overview.md
@@ -10,14 +10,12 @@ command:
```console
core@control-1 ~ $ kubectl fabric inspect fabric
Switches:
-NAME PROFILE ROLE GROUPS SERIAL STATE GEN APPLIED HEARTBEAT
-leaf-01 Virtual Switch server-leaf mclag-1 000000000 Ready 1/1 4 minutes ago 15 seconds ago
-leaf-02 Virtual Switch server-leaf mclag-1 000000000 Ready 1/1 3 minutes ago 19 seconds ago
-leaf-03 Virtual Switch server-leaf eslag-1 000000000 Ready 2/2 5 minutes ago 12 seconds ago
-leaf-04 Virtual Switch server-leaf eslag-1 000000000 Ready 2/2 3 minutes ago 17 seconds ago
-leaf-05 Virtual Switch server-leaf 000000000 Ready 2/2 5 minutes ago 9 seconds ago
-spine-01 Virtual Switch spine 000000000 Ready 1/1 3 minutes ago 19 seconds ago
-spine-02 Virtual Switch spine 000000000 Ready 2/2 4 minutes ago 1 second ago
+NAME PROFILE ROLE GROUPS SERIAL STATE GEN APPLIED HEARTBEAT
+leaf-01 Virtual Switch server-leaf eslag-1 0000000000000000000 Ready 1/1 10 minutes ago 22 seconds ago
+leaf-02 Virtual Switch server-leaf eslag-1 0000000000000000000 Ready 1/1 21 minutes ago 19 seconds ago
+leaf-03 Virtual Switch server-leaf 0000000000000000000 Ready 1/1 38 minutes ago 10 seconds ago
+spine-01 Virtual Switch spine 0000000000000000000 Ready 1/1 15 minutes ago 10 seconds ago
+spine-02 Virtual Switch spine 0000000000000000000 Ready 1/1 45 minutes ago 24 seconds ago
```
The output above is from the virtual testing environment. In a deployment of physical
@@ -25,7 +23,7 @@ switches, the profile would match the profile of the switch, and the correct
serial number would be displayed.
The `GROUP` column will be populated if you have redundancy configured on the
-switches, either MCLAG, or ESLAG.
+switches, such as ESLAG (EVPN Multi-Homing).
The `GEN` column shows the applied/current generation. If the numbers are equal
then there are no pending changes for the switches.
diff --git a/docs/user-guide/connections.md b/docs/user-guide/connections.md
index 7a3afa8c..7970c2d5 100644
--- a/docs/user-guide/connections.md
+++ b/docs/user-guide/connections.md
@@ -58,37 +58,11 @@ spec:
port: s5248-01/E1/2
```
-### MCLAG
-
-MCLAG server connections are used to connect servers to a pair of switches using multiple ports (Dual-homing).
-Switches should be configured as an MCLAG pair which requires them to be in a single redundancy group of type `mclag`
-and a Connection with type `mclag-domain` between them. MCLAG switches should also have the same `spec.ASN` and
-`spec.VTEPIP`. The server interfaces should be configured for 802.3ad LACP.
-
-```yaml
-apiVersion: wiring.githedgehog.com/v1beta1
-kind: Connection
-metadata:
- name: server-1--mclag--s5248-01--s5248-02
- namespace: default
-spec:
- mclag:
- links: # Defines multiple links between a single server and a pair of switches
- - server:
- port: server-1/enp2s1
- switch:
- port: s5248-01/E1/1
- - server:
- port: server-1/enp2s2
- switch:
- port: s5248-02/E1/1
-```
-
### ESLAG
-ESLAG server connections are used to connect servers to the 2-4 switches using multiple ports (Multi-homing). Switches
-should belong to the same redundancy group with type `eslag`, but contrary to the MCLAG case, no other configuration is
-required. The server interfaces should be configured for 802.3ad LACP.
+ESLAG (EVPN Multi-Homing) server connections are used to connect servers to 2-4 switches using multiple ports.
+This is the recommended approach for multi-homing. Switches should belong to the same redundancy group with
+type `eslag`. The server interfaces should be configured for 802.3ad LACP.
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
@@ -98,7 +72,7 @@ metadata:
namespace: default
spec:
eslag:
- links: # Defines multiple links between a single server and a 2-4 switches
+ links: # Defines multiple links between a single server and 2-4 switches
- server:
port: server-1/enp2s1
switch:
@@ -172,40 +146,6 @@ spec:
port: s5248-04/E1/56
```
-### MCLAG-Domain
-
-MCLAG-Domain connections define a pair of MCLAG switches with Session and Peer link between them. Switches should be
-configured as an MCLAG, pair which requires them to be in a single redundancy group of type `mclag` and Connection with
-type `mclag-domain` between them. MCLAG switches should also have the same `spec.ASN` and `spec.VTEPIP`.
-
-```yaml
-apiVersion: wiring.githedgehog.com/v1beta1
-kind: Connection
-metadata:
- name: s5248-01--mclag-domain--s5248-02
- namespace: default
-spec:
- mclagDomain:
- peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link
- - switch1:
- port: s5248-01/E1/12
- switch2:
- port: s5248-02/E1/12
- - switch1:
- port: s5248-01/E1/13
- switch2:
- port: s5248-02/E1/13
- sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link
- - switch1:
- port: s5248-01/E1/14
- switch2:
- port: s5248-02/E1/14
- - switch1:
- port: s5248-01/E1/15
- switch2:
- port: s5248-02/E1/15
-```
-
## Connecting Fabric to the outside world
Connections in this section provide connectivity to the outside world. For example, they can be connections to the
@@ -290,3 +230,71 @@ spec:
ip: 172.30.128.8/31
port: spine-01/E1/5
```
+
+## Deprecated Connections
+
+### MCLAG
+
+!!! warning "Deprecated"
+ MCLAG is being deprecated. Use [ESLAG](#eslag) for multi-homing instead.
+
+MCLAG server connections are used to connect servers to a pair of switches using multiple ports (Dual-homing).
+Switches should be configured as an MCLAG pair which requires them to be in a single redundancy group of type `mclag`
+and a Connection with type `mclag-domain` between them. MCLAG switches should also have the same `spec.ASN` and
+`spec.VTEPIP`. The server interfaces should be configured for 802.3ad LACP.
+
+```yaml
+apiVersion: wiring.githedgehog.com/v1beta1
+kind: Connection
+metadata:
+ name: server-1--mclag--s5248-01--s5248-02
+ namespace: default
+spec:
+ mclag:
+ links: # Defines multiple links between a single server and a pair of switches
+ - server:
+ port: server-1/enp2s1
+ switch:
+ port: s5248-01/E1/1
+ - server:
+ port: server-1/enp2s2
+ switch:
+ port: s5248-02/E1/1
+```
+
+### MCLAG-Domain
+
+!!! warning "Deprecated"
+ MCLAG is being deprecated. Use [ESLAG](#eslag) for multi-homing instead.
+
+MCLAG-Domain connections define a pair of MCLAG switches with the Session and Peer links between them. Switches should be
+configured as an MCLAG pair, which requires them to be in a single redundancy group of type `mclag` and a Connection with
+type `mclag-domain` between them. MCLAG switches should also have the same `spec.ASN` and `spec.VTEPIP`.
+
+```yaml
+apiVersion: wiring.githedgehog.com/v1beta1
+kind: Connection
+metadata:
+ name: s5248-01--mclag-domain--s5248-02
+ namespace: default
+spec:
+ mclagDomain:
+ peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link
+ - switch1:
+ port: s5248-01/E1/12
+ switch2:
+ port: s5248-02/E1/12
+ - switch1:
+ port: s5248-01/E1/13
+ switch2:
+ port: s5248-02/E1/13
+ sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link
+ - switch1:
+ port: s5248-01/E1/14
+ switch2:
+ port: s5248-02/E1/14
+ - switch1:
+ port: s5248-01/E1/15
+ switch2:
+ port: s5248-02/E1/15
+```
diff --git a/docs/user-guide/devices.md b/docs/user-guide/devices.md
index 4a2f4fbc..662dcda7 100644
--- a/docs/user-guide/devices.md
+++ b/docs/user-guide/devices.md
@@ -43,7 +43,7 @@ spec:
- some-group
redundancy: # Optional field to define that switch belongs to the redundancy group
group: eslag-1 # Name of the redundancy group
- type: eslag # Type of the redundancy group, one of mclag or eslag
+    type: eslag # Type of the redundancy group, should be eslag (mclag is deprecated)
enableAllPorts: true # Optional field to enable all ports on the switch by default
portAutoNegs: # Used for rj45 copper ports, and 800G ports for link conditioning
E1/18: true
@@ -139,17 +139,14 @@ spec: {}
## Redundancy Groups
Redundancy groups are used to define the redundancy between switches. It's a regular `SwitchGroup` used by multiple
-switches and currently it could be MCLAG or ESLAG (EVPN MH / ESI). A switch can only belong to a single redundancy
-group.
+switches. ESLAG (EVPN Multi-Homing) is the recommended redundancy type and supports groups of up to 4 switches. A switch
+can only belong to a single redundancy group.
-MCLAG is only supported for pairs of switches and ESLAG is supported for up to 4 switches. Multiple types of redundancy
-groups can be used in the fabric simultaneously.
+Connections with type `eslag` are used to define the server connections to switches. They are only supported if the
+switch belongs to a redundancy group of type `eslag`.
-Connections with types `mclag` and `eslag` are used to define the servers connections to switches. They are only
-supported if the switch belongs to a redundancy group with the corresponding type.
-
-In order to define a MCLAG or ESLAG redundancy group, you need to create a `SwitchGroup` object and assign it to the
-switches using the `redundancy` field.
+To define an ESLAG redundancy group, create a `SwitchGroup` object and assign it to the switches using the
+`redundancy` field.
Example of switch configured for ESLAG:
@@ -174,31 +171,8 @@ spec:
...
```
-And example of switch configured for MCLAG:
-
-```{.yaml .annotate linenums="1" title="MCLAG-switchgroup.yaml"}
-apiVersion: wiring.githedgehog.com/v1beta1
-kind: SwitchGroup
-metadata:
- name: mclag-1
- namespace: default
-spec: {}
----
-apiVersion: wiring.githedgehog.com/v1beta1
-kind: Switch
-metadata:
- name: s5248-01
- namespace: default
-spec:
- ...
- redundancy:
- group: mclag-1
- type: mclag
- ...
-```
-
-In case of MCLAG it's required to have a special connection with type `mclag-domain` that defines the peer and session
-links between switches. For more details, see [Connections](./connections.md).
+!!! warning "MCLAG Deprecated"
+ MCLAG is being deprecated. Use ESLAG for multi-homing instead.
## Servers
diff --git a/docs/user-guide/host-settings.md b/docs/user-guide/host-settings.md
index 837b82c6..5a7895da 100644
--- a/docs/user-guide/host-settings.md
+++ b/docs/user-guide/host-settings.md
@@ -10,19 +10,12 @@ additional details on options and behavior, consult the [kernel bonding driver][
[nmanager]: https://networkmanager.dev/docs/admins/
[netplan]: https://documentation.ubuntu.com/server/explanation/networking/configuring-networks/index.html
-## MCLAG / ESLAG
+## Multi-homing
-The multi-chassis LAG architecture is a way to provide device redundancy
-in a network architecture. At the physical layer, an MCLAG topology is a single
-server connected to two different switches and, those switches are directly connected
-to each other in addition to being connected to the rest of the fabric.
-
-ESLAG is a similar technology to MCLAG, with the beneficial difference that the
-switches do not need to be directly connected to each other. There can be up to 4
-switches in an ESLAG group, whereas MCLAG is always two switches.
-
-Regardless of whether MCLAG or ESLAG is chosen, the host must configure its two
-(or more) ports using LACP (IEEE 802.3ad).
+ESLAG (EVPN Multi-Homing) is the recommended way to provide device redundancy.
+A server connects to multiple switches (up to 4) without requiring the switches
+to be directly connected to each other. The host must configure its ports using
+LACP (IEEE 802.3ad).
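+
+As a minimal sketch of what this means on the host side (assuming a netplan-managed server, the
+illustrative interface names `enp2s1`/`enp2s2`, and VLAN `1001`; see the Server Settings below
+for the supported workflow), an 802.3ad bond carrying a VLAN could look like:
+
+```yaml
+network:
+  version: 2
+  ethernets:
+    enp2s1: {dhcp4: false}
+    enp2s2: {dhcp4: false}
+  bonds:
+    bond0:
+      interfaces: [enp2s1, enp2s2]
+      parameters:
+        mode: 802.3ad                  # LACP, required by the fabric
+        transmit-hash-policy: layer2+3
+  vlans:
+    bond0.1001:
+      id: 1001
+      link: bond0
+      dhcp4: true                      # VPC subnets can provide DHCP
+```
+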
### Server Settings
diff --git a/docs/user-guide/shrink-expand.md b/docs/user-guide/shrink-expand.md
index 32c601b0..546832f6 100644
--- a/docs/user-guide/shrink-expand.md
+++ b/docs/user-guide/shrink-expand.md
@@ -18,7 +18,7 @@ a spine is being added, it shares the same ASN as the existing spines. For an
IPv4 address increment the largest IP by one, keep the same netmask.
!!! note
- If the`Switch` will be used in `ESLAG` or `MCLAG` groups, appropriate groups should exist. Redundancy groups should
+    If the `Switch` will be used in `ESLAG` or `MCLAG` (deprecated) groups, the appropriate groups should exist. Redundancy groups should
be specified in the `Switch` object before creation.
#### Expanding Example
diff --git a/docs/vlab/demo.md b/docs/vlab/demo.md
index 0cfbd835..f820dd27 100644
--- a/docs/vlab/demo.md
+++ b/docs/vlab/demo.md
@@ -3,91 +3,143 @@
## Goals
The goal of this demo is to show how to use VPCs, attach and peer them and run test connectivity between the servers.
-Examples are based on the default VLAB topology.
+Examples are based on the VLAB topology described in the [Running VLAB](running.md) section.
-You can find instructions on how to setup VLAB in the [Overview](overview.md) and [Running VLAB](running.md) sections.
+## VLAB Topology
-## Default topology
+### Spine-Leaf
-The default topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves and 1 non-MCLAG leaf.
-
-For more details on customizing topologies see the [Running VLAB](running.md) section.
-
-In the default topology, the following Control Node and Switch VMs are created, the Control Node is connected to every switch, the lines are ommitted for clarity:
+The topology contains 2 spines, 2 ESLAG leaves, 1 orphan leaf, and a gateway, as shown below:
```mermaid
graph TD
- S1([Spine 1])
- S2([Spine 2])
-
- L1([MCLAG Leaf 1])
- L2([MCLAG Leaf 2])
- L3([ESLAG Leaf 3])
- L4([ESLAG Leaf 4])
- L5([Leaf 5])
-
- L1 & L2 & L5 & L3 & L4 --> S1 & S2
-```
-
-As well as the following test servers, as above Control Node connections are omitted:
-
-```mermaid
-graph TD
- S1([Spine 1])
- S2([Spine 2])
- L1([MCLAG Leaf 1])
- L2([MCLAG Leaf 2])
- L3([ESLAG Leaf 3])
- L4([ESLAG Leaf 4])
- L5([Leaf 5])
-
- TS1[Server 1]
- TS2[Server 2]
- TS3[Server 3]
- TS4[Server 4]
- TS5[Server 5]
- TS6[Server 6]
- TS7[Server 7]
- TS8[Server 8]
- TS9[Server 9]
- TS10[Server 10]
-
- subgraph MCLAG
- L1
- L2
+%% Style definitions
+classDef gateway fill:#FFF2CC,stroke:#999,stroke-width:1px,color:#000
+classDef spine fill:#F8CECC,stroke:#B85450,stroke-width:1px,color:#000
+classDef leaf fill:#DAE8FC,stroke:#6C8EBF,stroke-width:1px,color:#000
+classDef server fill:#D5E8D4,stroke:#82B366,stroke-width:1px,color:#000
+classDef mclag fill:#F0F8FF,stroke:#6C8EBF,stroke-width:1px,color:#000
+classDef eslag fill:#FFF8E8,stroke:#CC9900,stroke-width:1px,color:#000
+classDef external fill:#FFCC99,stroke:#D79B00,stroke-width:1px,color:#000
+classDef hidden fill:none,stroke:none
+classDef legendBox fill:white,stroke:#999,stroke-width:1px,color:#000
+
+%% Network diagram
+subgraph Gateways[" "]
+ direction LR
+ Gateway_1["gateway-1"]
+end
+
+subgraph Spines[" "]
+ direction LR
+ subgraph Spine_01_Group [" "]
+ direction TB
+ Spine_01["spine-01
spine"]
end
- TS3 --> L1
- TS1 --> L1
- TS1 --> L2
-
- TS2 --> L1
- TS2 --> L2
-
- TS4 --> L2
-
- subgraph ESLAG
- L3
- L4
+ subgraph Spine_02_Group [" "]
+ direction TB
+ Spine_02["spine-02
spine"]
+ end
+end
+
+subgraph Leaves[" "]
+ direction LR
+ subgraph Eslag_1 ["eslag-1"]
+ direction LR
+ Leaf_01["leaf-01
server-leaf"]
+ Leaf_02["leaf-02
server-leaf"]
end
- TS7 --> L3
- TS5 --> L3
- TS5 --> L4
- TS6 --> L3
- TS6 --> L4
-
- TS8 --> L4
- TS9 --> L5
- TS10 --> L5
-
- L1 & L2 & L2 & L3 & L4 & L5 <----> S1 & S2
+ Leaf_03["leaf-03
server-leaf"]
+end
+
+subgraph Servers[" "]
+ direction TB
+ Server_03["server-03"]
+ Server_01["server-01"]
+ Server_02["server-02"]
+ Server_04["server-04"]
+ Server_05["server-05"]
+ Server_06["server-06"]
+end
+
+%% Connections
+
+%% Gateway connections
+Gateway_1 ---|"enp2s2↔E1/7"| Spine_02
+Gateway_1 ---|"enp2s1↔E1/7"| Spine_01
+
+%% Spine_01 -> Leaves
+Spine_01 ---|"E1/4↔E1/1
E1/5↔E1/2"| Leaf_01
+Spine_01 ---|"E1/6↔E1/4
E1/5↔E1/3"| Leaf_02
+Spine_01 ---|"E1/4↔E1/5
E1/5↔E1/6"| Leaf_03
+
+%% Spine_02 -> Leaves
+Spine_02 ---|"E1/7↔E1/3
E1/8↔E1/4"| Leaf_02
+Spine_02 ---|"E1/6↔E1/1
E1/7↔E1/2"| Leaf_01
+Spine_02 ---|"E1/6↔E1/5
E1/7↔E1/6"| Leaf_03
+
+%% Leaves -> Servers
+Leaf_01 ---|"enp2s1↔E1/2"| Server_02
+Leaf_01 ---|"enp2s1↔E1/1"| Server_01
+Leaf_01 ---|"enp2s1↔E1/3"| Server_03
+
+Leaf_02 ---|"enp2s1↔E1/3
enp2s2↔E1/4"| Server_04
+Leaf_02 ---|"enp2s2↔E1/2"| Server_02
+Leaf_02 ---|"enp2s2↔E1/1"| Server_01
+
+Leaf_03 ---|"enp2s1↔E1/2
enp2s2↔E1/3"| Server_06
+Leaf_03 ---|"enp2s1↔E1/1"| Server_05
+
+%% Mesh connections
+
+%% External connections
+
+subgraph Legend["Network Connection Types"]
+ direction LR
+ %% Create invisible nodes for the start and end of each line
+ L1(( )) --- |"Fabric Links"| L2(( ))
+ L5(( )) --- |"Bundled Server Links (x2)"| L6(( ))
+ L7(( )) --- |"Unbundled Server Links"| L8(( ))
+ L9(( )) --- |"ESLAG Server Links"| L10(( ))
+ L11(( )) --- |"Gateway Links"| L12(( ))
+ P1(( )) --- |"Label Notation: Downstream ↔ Upstream"| P2(( ))
+end
+
+class Gateway_1 gateway
+class Spine_01,Spine_02 spine
+class Leaf_01,Leaf_02,Leaf_03 leaf
+class Server_03,Server_01,Server_02,Server_04,Server_05,Server_06 server
+class Eslag_1 eslag
+class L1,L2,L5,L6,L7,L8,L9,L10,L11,L12,P1,P2 hidden
+class Legend legendBox
+linkStyle default stroke:#666,stroke-width:2px
+linkStyle 0,1 stroke:#CC9900,stroke-width:2px
+linkStyle 2,3,4,5,6,7 stroke:#CC3333,stroke-width:4px
+linkStyle 11,14 stroke:#66CC66,stroke-width:4px
+linkStyle 8,9,12,13 stroke:#CC9900,stroke-width:4px,stroke-dasharray:5 5
+linkStyle 10,15 stroke:#999999,stroke-width:2px
+linkStyle 16 stroke:#B85450,stroke-width:2px
+linkStyle 17 stroke:#82B366,stroke-width:2px
+linkStyle 18 stroke:#000000,stroke-width:2px
+linkStyle 19 stroke:#CC9900,stroke-width:2px,stroke-dasharray:5 5
+linkStyle 20 stroke:#CC9900,stroke-width:2px
+linkStyle 21 stroke:#FFFFFF
+
+%% Make subgraph containers invisible
+style Gateways fill:none,stroke:none
+style Spines fill:none,stroke:none
+style Leaves fill:none,stroke:none
+style Servers fill:none,stroke:none
+style Spine_01_Group fill:none,stroke:none
+style Spine_02_Group fill:none,stroke:none
```
## Utility based VPC creation
### Setup VPCs
-`hhfab vlab` includes a utility to create VPCs in vlab. This utility is a `hhfab vlab` sub-command, `hhfab vlab setup-vpcs`.
+`hhfab` includes a utility to create VPCs in VLAB. This utility is a `hhfab vlab` sub-command, `hhfab vlab setup-vpcs`.
```
NAME:
@@ -117,11 +169,11 @@ OPTIONS:
--brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
--cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
--verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
- --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+ --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu/hhfab") [$HHFAB_WORK_DIR]
```
### Setup Peering
-`hhfab vlab` includes a utility to create VPC peerings in VLAB. This utility is a `hhfab vlab` sub-command, `hhfab vlab setup-peerings`.
+`hhfab` includes a utility to create VPC peerings in VLAB. This utility is a `hhfab vlab` sub-command, `hhfab vlab setup-peerings`.
```
NAME:
@@ -144,7 +196,6 @@ USAGE:
VPC Peerings:
1+2 -- VPC peering between vpc-01 and vpc-02
- 1+2:gw -- same as above but using gateway peering, only valid if gateway is present
demo-1+demo-2 -- VPC peering between vpc-demo-1 and vpc-demo-2
1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present
1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border
@@ -153,7 +204,6 @@ USAGE:
External Peerings:
1~as5835 -- external peering for vpc-01 with External as5835
- 1~as5835:gw -- same as above but using gateway peering, only valid if gateway is present
1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing
default subnet and any route from external
1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and
@@ -171,11 +221,11 @@ OPTIONS:
--brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
--cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
--verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
- --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+ --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu/hhfab") [$HHFAB_WORK_DIR]
```
### Test Connectivity
-`hhfab vlab` includes a utility to test connectivity between servers inside VLAB. This utility is a `hhfab vlab` sub-command. `hhfab vlab test-connectivity`.
+`hhfab` includes a utility to test connectivity between servers inside VLAB. This utility is a `hhfab vlab` sub-command, `hhfab vlab test-connectivity`.
```
NAME:
@@ -185,14 +235,17 @@ USAGE:
hhfab vlab test-connectivity [command options]
OPTIONS:
+ --all-servers, --all requires all servers to be attached to a VPC (default: false)
--curls value number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)
--destination value, --dst value [ --destination value, --dst value ] server to use as destination for connectivity tests (default: all servers)
+ --dscp value DSCP value to use for iperf3 tests (0 to disable DSCP) (default: 0)
--help, -h show help
--iperfs value seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)
--iperfs-speed value minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 8200)
--name value, -n value name of the VM or HW to access
--pings value number of pings to send between each pair of servers (0 to disable) (default: 5)
--source value, --src value [ --source value, --src value ] server to use as source for connectivity tests (default: all servers)
+ --tos value TOS value to use for iperf3 tests (0 to disable TOS) (default: 0)
--wait-switches-ready, --wait wait for switches to be ready before testing connectivity (default: true)
Global options:
@@ -200,7 +253,8 @@ OPTIONS:
--brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
--cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
--verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
- --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+ --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu/hhfab") [$HHFAB_WORK_DIR]
+
```
## Manual VPC creation
### Creating and attaching VPCs
@@ -211,24 +265,24 @@ server enabled with its optional IP address range start defined, and to attach t
```
core@control-1 ~ $ kubectl get conn | grep server
-server-01--mclag--leaf-01--leaf-02 mclag 5h13m
-server-02--mclag--leaf-01--leaf-02 mclag 5h13m
-server-03--unbundled--leaf-01 unbundled 5h13m
-server-04--bundled--leaf-02 bundled 5h13m
-server-05--unbundled--leaf-03 unbundled 5h13m
-server-06--bundled--leaf-03 bundled 5h13m
+server-01--eslag--leaf-01--leaf-02 eslag 44h
+server-02--eslag--leaf-01--leaf-02 eslag 44h
+server-03--unbundled--leaf-01 unbundled 44h
+server-04--bundled--leaf-02 bundled 44h
+server-05--unbundled--leaf-03 unbundled 44h
+server-06--bundled--leaf-03 bundled 44h
core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10
-06:48:46 INF VPC created name=vpc-1
+13:46:58 INF VPC created name=vpc-1
core@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10
-06:49:04 INF VPC created name=vpc-2
+13:47:14 INF VPC created name=vpc-2
-core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02
-06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02
+core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--eslag--leaf-01--leaf-02
+13:47:52 INF VPCAttachment created name=vpc-1--default--server-01--eslag--leaf-01--leaf-02
-core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02
-06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02
+core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--eslag--leaf-01--leaf-02
+13:48:07 INF VPCAttachment created name=vpc-2--default--server-02--eslag--leaf-01--leaf-02
```
The VPC subnet should belong to an IPv4Namespace, the default one in the VLAB is `10.0.0.0/16`:
@@ -236,7 +290,7 @@ The VPC subnet should belong to an IPv4Namespace, the default one in the VLAB is
```
core@control-1 ~ $ kubectl get ipns
NAME SUBNETS AGE
-default ["10.0.0.0/16"] 5h14m
+default ["10.0.0.0/16"] 44h
```
After you created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested
@@ -244,12 +298,12 @@ configuration was applied to the switches:
```
core@control-1 ~ $ kubectl get agents
-NAME ROLE DESCR APPLIED APPLIEDG CURRENTG VERSION
-leaf-01 server-leaf VS-01 MCLAG 1 2m2s 5 5 v0.23.0
-leaf-02 server-leaf VS-02 MCLAG 1 2m2s 4 4 v0.23.0
-leaf-03 server-leaf VS-03 112s 5 5 v0.23.0
-spine-01 spine VS-04 16m 3 3 v0.23.0
-spine-02 spine VS-05 18m 4 4 v0.23.0
+NAME ROLE DESCR APPLIED APPLIEDG CURRENTG VERSION REBOOTREQ
+leaf-01 server-leaf VS-01 ESLAG 1 36m 5 5 v0.96.2
+leaf-02 server-leaf VS-02 ESLAG 1 46m 5 5 v0.96.2
+leaf-03 server-leaf VS-03 21m 3 3 v0.96.2
+spine-01 spine VS-04 7m27s 1 1 v0.96.2
+spine-02 spine VS-05 37m 1 1 v0.96.2
```
In this example, the values in columns `APPLIEDG` and `CURRENTG` are equal which means that the requested configuration
@@ -258,7 +312,7 @@ has been applied.
### Setting up networking on test servers
You can use `hhfab vlab ssh` on the host to SSH into the test servers and configure networking there. For example, for
-both `server-01` (MCLAG attached to both `leaf-01` and `leaf-02`) we need to configure a bond with a VLAN on top of it
+`server-01` (ESLAG attached to both `leaf-01` and `leaf-02`) we need to configure a bond with a VLAN on top of it
and for the `server-05` (single-homed unbundled attached to `leaf-03`) we need to configure just a VLAN and they both
will get an IP address from the DHCP server. You can use the `ip` command to configure networking on the servers or use
the little helper pre-installed by Fabricator on test servers, `hhnet`.
@@ -269,21 +323,21 @@ For `server-01`:
core@server-01 ~ $ hhnet cleanup
core@server-01 ~ $ hhnet bond 1001 layer2+3 enp2s1 enp2s2
10.0.1.10/24
-core@server-01 ~ $ ip a
+core@server-01 ~ $ ip address show
...
-3: enp2s1: mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
- link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01
-4: enp2s2: mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
- link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02
-6: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
- inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
+3: enp2s1: mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
+ link/ether 3e:2e:1e:ef:e3:c8 brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01
+4: enp2s2: mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
+ link/ether 3e:2e:1e:ef:e3:c8 brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02
+8: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
+ link/ether 3e:2e:1e:ef:e3:c8 brd ff:ff:ff:ff:ff:ff
+ inet6 fe80::3c2e:1eff:feef:e3c8/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
-7: bond0.1001@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
+9: bond0.1001@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
+ link/ether 3e:2e:1e:ef:e3:c8 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001
- valid_lft 86396sec preferred_lft 86396sec
- inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
+ valid_lft 3580sec preferred_lft 3580sec
+ inet6 fe80::3c2e:1eff:feef:e3c8/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
```
@@ -293,21 +347,21 @@ And for `server-02`:
core@server-02 ~ $ hhnet cleanup
core@server-02 ~ $ hhnet bond 1002 layer2+3 enp2s1 enp2s2
10.0.2.10/24
-core@server-02 ~ $ ip a
+core@server-02 ~ $ ip address show
...
-3: enp2s1: mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
- link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01
-4: enp2s2: mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
- link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02
+3: enp2s1: mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
+ link/ether 6e:27:d4:e2:6b:f7 brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:03:01
+4: enp2s2: mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
+ link/ether 6e:27:d4:e2:6b:f7 brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:03:02
8: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
- inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
+ link/ether 6e:27:d4:e2:6b:f7 brd ff:ff:ff:ff:ff:ff
+ inet6 fe80::6c27:d4ff:fee2:6bf7/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
9: bond0.1002@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
+ link/ether 6e:27:d4:e2:6b:f7 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002
- valid_lft 86185sec preferred_lft 86185sec
- inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
+ valid_lft 3594sec preferred_lft 3594sec
+ inet6 fe80::6c27:d4ff:fee2:6bf7/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
```
@@ -347,10 +401,10 @@ To enable connectivity between the VPCs, peer them using `kubectl fabric vpc pee
```
core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2
-07:04:58 INF VPCPeering created name=vpc-1--vpc-2
+23:43:21 INF VPCPeering created name=vpc-1--vpc-2
```
-Make sure to wait until the peering is applied to the switches using `kubectl get agents` command. After that, you can
+Make sure to wait until the peering is applied to the switches using the `kubectl get agents` command. Once the `APPLIEDG` and `CURRENTG` columns are equal, you can
test connectivity between the servers again:
```
@@ -479,7 +533,7 @@ At that point you can setup networking on `server-03` the same as you did for `s
### Creating simple VPC peering via the gateway
-If gateway was [enabled](running.md#gateway) for your VLAB topology, you also have the option of peering VPCs
+When the gateway is [enabled](running.md#gateway) in your VLAB topology, you also have the option of peering VPCs
via the gateway. One way of doing so is using the [hhfab helpers](#setup-peering). For example, assuming vpc-1
and vpc-2 were previously created, you can run:
diff --git a/docs/vlab/overview.md b/docs/vlab/overview.md
index 43202236..a41d011c 100644
--- a/docs/vlab/overview.md
+++ b/docs/vlab/overview.md
@@ -26,8 +26,9 @@ The following packages needs to be installed: `qemu-kvm socat`. Docker is also r
into the OCI registry.
By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves, and 1 "orphan" leaf, i.e. with
-no redundancy scheme. Alternatively, users can run the mesh topology, which directly connects as few as 2 leaves
-to each other, without the need for a spine switch in between.
+no redundancy scheme. For ESLAG-only topologies (recommended), use `--mclag-leafs-count=0`. Alternatively, users can
+run the mesh topology, which directly connects as few as 2 leaves to each other, without the need for a spine switch
+in between.
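+
+For example, the following generates an ESLAG-only wiring diagram (the same command is shown in
+[Running VLAB](running.md)):
+
+```console
+ubuntu@docs:~$ hhfab vlab gen --mclag-leafs-count 0 --eslag-leaf-groups 2
+```
+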
You can calculate the system requirements based on the allocated resources to the VMs using the following table:
diff --git a/docs/vlab/running.md b/docs/vlab/running.md
index d7c90d1c..08528127 100644
--- a/docs/vlab/running.md
+++ b/docs/vlab/running.md
@@ -5,158 +5,157 @@ before running VLAB.
## Initialize VLAB
-First, initialize Fabricator by running `hhfab init --dev`. This command creates the `fab.yaml` file, which is the main configuration file for the fabric. This command supports several customization options that are listed in the output of `hhfab init --help`.
+First, initialize Fabricator by running `hhfab init --dev --gw`. This command creates the `fab.yaml` file, which is the main configuration file for the fabric; the `--gw` flag enables the gateway. Customization options are listed in the output of `hhfab init --help`.
```console
-ubuntu@docs:~$ hhfab init --dev
-11:26:52 INF Hedgehog Fabricator version=v0.41.3
-11:26:52 INF Generated initial config
-11:26:52 INF Adjust configs (incl. credentials, modes, subnets, etc.) file=fab.yaml
-11:26:52 INF Include wiring files (.yaml) or adjust imported ones dir=include
+ubuntu@docs:~$ hhfab init --dev --gw
+10:26:45 INF Hedgehog Fabricator version=v0.43.1
+10:26:45 INF Generated initial config
+10:26:45 INF Adjust configs (incl. credentials, modes, subnets, etc.) file=fab.yaml
+10:26:45 INF Include wiring (fabric/gateway) files (.yaml) or adjust imported ones dir=include
```
## VLAB Topology
### Spine-Leaf
-By default, `hhfab vlab gen` creates 2 spines, 2 MCLAG leaves, 2 ESLAG leaves, and 1 orphan (non-LAG) leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links. To generate the preceding topology, `hhfab vlab gen`. You can also configure the number of spines, leaves, connections, and so on. For example, flags `--spines-count` and `--mclag-leafs-count` allow you to set the number of spines and MCLAG leaves, respectively. For complete options, `hhfab vlab gen -h`.
+To generate a spine-leaf topology, use `hhfab vlab gen`. The following command generates an ESLAG (EVPN Multi-Homing) topology:
```console
-ubuntu@docs:~$ hhfab vlab gen
-11:37:33 INF Hedgehog Fabricator version=v0.41.1
-11:37:33 INF Building VLAB wiring diagram fabricMode=spine-leaf
-11:37:33 INF >>> spinesCount=2 fabricLinksCount=2 meshLinksCount=0
-11:37:33 INF >>> eslagLeafGroups=2
-11:37:33 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2
-11:37:33 INF >>> orphanLeafsCount=1
-11:37:33 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1
-11:37:33 INF Generated wiring file name=vlab.generated.yaml
+ubuntu@docs:~$ hhfab vlab gen --mclag-leafs-count 0 --eslag-leaf-groups 2
+10:26:46 INF Hedgehog Fabricator version=v0.43.1
+10:26:46 INF Building VLAB wiring diagram fabricMode=spine-leaf
+10:26:46 INF >>> spinesCount=2 fabricLinksCount=2 meshLinksCount=0
+10:26:46 INF >>> eslagLeafGroups=2
+10:26:46 INF >>> gatewayUplinks=2 gatewayDriver=kernel
+10:26:46 INF >>> mclagLeafsCount=0 mclagSessionLinks=0 mclagPeerLinks=0
+10:26:46 INF >>> orphanLeafsCount=1
+10:26:46 INF >>> mclagServers=0 eslagServers=2 unbundledServers=1 bundledServers=1
+10:26:46 INF >>> externalCount=0 externalMclagConnCount=0 externalEslagConnCount=0 externalOrphanConnCount=0
+10:26:46 INF Generated wiring file name=vlab.generated.yaml
```
-The default spine-leaf topology with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves and 1 orphan leaf is shown below:
+You can customize the topology with flags like `--spines-count` and `--eslag-leaf-groups`. For complete options, run `hhfab vlab gen -h`.
+
+The topology with 2 spines, 2 ESLAG leaves, 1 orphan leaf, and a gateway is shown below:
```mermaid
graph TD
%% Style definitions
+classDef gateway fill:#FFF2CC,stroke:#999,stroke-width:1px,color:#000
classDef spine fill:#F8CECC,stroke:#B85450,stroke-width:1px,color:#000
classDef leaf fill:#DAE8FC,stroke:#6C8EBF,stroke-width:1px,color:#000
classDef server fill:#D5E8D4,stroke:#82B366,stroke-width:1px,color:#000
classDef mclag fill:#F0F8FF,stroke:#6C8EBF,stroke-width:1px,color:#000
classDef eslag fill:#FFF8E8,stroke:#CC9900,stroke-width:1px,color:#000
+classDef external fill:#FFCC99,stroke:#D79B00,stroke-width:1px,color:#000
classDef hidden fill:none,stroke:none
classDef legendBox fill:white,stroke:#999,stroke-width:1px,color:#000
%% Network diagram
-subgraph Spines[ ]
- direction LR
- subgraph Spine_01_Group [ ]
- direction TB
- Spine_01["spine-01
spine"]
- end
- subgraph Spine_02_Group [ ]
- direction TB
- Spine_02["spine-02
spine"]
- end
+subgraph Gateways[" "]
+ direction LR
+ Gateway_1["gateway-1"]
end
-subgraph Leaves[ ]
- direction LR
- subgraph MCLAG [MCLAG]
- direction LR
- Leaf_01["leaf-01
server-leaf"]
- Leaf_02["leaf-02
server-leaf"]
- end
+subgraph Spines[" "]
+ direction LR
+ subgraph Spine_01_Group [" "]
+ direction TB
+ Spine_01["spine-01
spine"]
+ end
+ subgraph Spine_02_Group [" "]
+ direction TB
+ Spine_02["spine-02
spine"]
+ end
+end
- subgraph ESLAG [ESLAG]
- direction LR
- Leaf_03["leaf-03
server-leaf"]
- Leaf_04["leaf-04
server-leaf"]
- end
+subgraph Leaves[" "]
+ direction LR
+ subgraph Eslag_1 ["eslag-1"]
+ direction LR
+ Leaf_01["leaf-01
server-leaf"]
+ Leaf_02["leaf-02
server-leaf"]
+ end
- Leaf_05["leaf-05
server-leaf"]
+ Leaf_03["leaf-03
server-leaf"]
end
-subgraph Servers[ ]
- direction TB
- Server_03["server-03"]
- Server_01["server-01"]
- Server_02["server-02"]
- Server_04["server-04"]
- Server_07["server-07"]
- Server_05["server-05"]
- Server_06["server-06"]
- Server_08["server-08"]
- Server_09["server-09"]
- Server_10["server-10"]
+subgraph Servers[" "]
+ direction TB
+ Server_03["server-03"]
+ Server_01["server-01"]
+ Server_02["server-02"]
+ Server_04["server-04"]
+ Server_05["server-05"]
+ Server_06["server-06"]
end
%% Connections
+%% Gateway connections
+Gateway_1 ---|"enp2s2↔E1/7"| Spine_02
+Gateway_1 ---|"enp2s1↔E1/7"| Spine_01
+
%% Spine_01 -> Leaves
-Spine_01 ---|"E1/8↔E1/1
E1/9↔E1/2"| Leaf_01
-Spine_01 ---|"E1/10↔E1/4
E1/9↔E1/3"| Leaf_02
+Spine_01 ---|"E1/4↔E1/1
E1/5↔E1/2"| Leaf_01
+Spine_01 ---|"E1/6↔E1/4
E1/5↔E1/3"| Leaf_02
Spine_01 ---|"E1/4↔E1/5
E1/5↔E1/6"| Leaf_03
-Spine_01 ---|"E1/4↔E1/9
E1/5↔E1/10"| Leaf_05
-Spine_01 ---|"E1/5↔E1/7
E1/6↔E1/8"| Leaf_04
%% Spine_02 -> Leaves
-Spine_02 ---|"E1/11↔E1/3
E1/12↔E1/4"| Leaf_02
-Spine_02 ---|"E1/7↔E1/7
E1/8↔E1/8"| Leaf_04
-Spine_02 ---|"E1/11↔E1/2
E1/10↔E1/1"| Leaf_01
+Spine_02 ---|"E1/7↔E1/3
E1/8↔E1/4"| Leaf_02
+Spine_02 ---|"E1/6↔E1/1
E1/7↔E1/2"| Leaf_01
Spine_02 ---|"E1/6↔E1/5
E1/7↔E1/6"| Leaf_03
-Spine_02 ---|"E1/7↔E1/10
E1/6↔E1/9"| Leaf_05
%% Leaves -> Servers
-Leaf_01 ---|"enp2s1↔E1/6"| Server_02
-Leaf_01 ---|"enp2s1↔E1/7"| Server_03
-Leaf_01 ---|"enp2s1↔E1/5"| Server_01
+Leaf_01 ---|"enp2s1↔E1/2"| Server_02
+Leaf_01 ---|"enp2s1↔E1/1"| Server_01
+Leaf_01 ---|"enp2s1↔E1/3"| Server_03
-Leaf_02 ---|"enp2s2↔E1/5"| Server_01
-Leaf_02 ---|"enp2s2↔E1/6"| Server_02
-Leaf_02 ---|"enp2s1↔E1/7
enp2s2↔E1/8"| Server_04
+Leaf_02 ---|"enp2s1↔E1/3
enp2s2↔E1/4"| Server_04
+Leaf_02 ---|"enp2s2↔E1/2"| Server_02
+Leaf_02 ---|"enp2s2↔E1/1"| Server_01
+Leaf_03 ---|"enp2s1↔E1/2
enp2s2↔E1/3"| Server_06
Leaf_03 ---|"enp2s1↔E1/1"| Server_05
-Leaf_03 ---|"enp2s1↔E1/2"| Server_06
-Leaf_03 ---|"enp2s1↔E1/3"| Server_07
-Leaf_04 ---|"enp2s2↔E1/2"| Server_06
-Leaf_04 ---|"enp2s1↔E1/3
enp2s2↔E1/4"| Server_08
-Leaf_04 ---|"enp2s2↔E1/1"| Server_05
+%% Mesh connections
-Leaf_05 ---|"enp2s1↔E1/1"| Server_09
-Leaf_05 ---|"enp2s1↔E1/2
enp2s2↔E1/3"| Server_10
+%% External connections
subgraph Legend["Network Connection Types"]
- direction LR
-
- %% Create invisible nodes for the start and end of each line
- L1(( )) --- |"Fabric Links"| L2(( ))
- L3(( )) --- |"MCLAG Server Links"| L4(( ))
- L5(( )) --- |"Bundled Server Links"| L6(( ))
- L7(( )) --- |"Unbundled Server Links"| L8(( ))
- L9(( )) --- |"ESLAG Server Links"| L10(( ))
+ direction LR
+ %% Create invisible nodes for the start and end of each line
+ L1(( )) --- |"Fabric Links"| L2(( ))
+ L5(( )) --- |"Bundled Server Links (x2)"| L6(( ))
+ L7(( )) --- |"Unbundled Server Links"| L8(( ))
+ L9(( )) --- |"ESLAG Server Links"| L10(( ))
+ L11(( )) --- |"Gateway Links"| L12(( ))
+ P1(( )) --- |"Label Notation: Downstream ↔ Upstream"| P2(( ))
end
+class Gateway_1 gateway
class Spine_01,Spine_02 spine
-class Leaf_01,Leaf_02,Leaf_03,Leaf_04,Leaf_05 leaf
-class Server_03,Server_01,Server_02,Server_04,Server_07,Server_05,Server_06,Server_08,Server_09,Server_10 server
-class MCLAG mclag
-class ESLAG eslag
-class L1,L2,L3,L4,L5,L6,L7,L8,L9,L10 hidden
+class Leaf_01,Leaf_02,Leaf_03 leaf
+class Server_03,Server_01,Server_02,Server_04,Server_05,Server_06 server
+class Eslag_1 eslag
+class L1,L2,L5,L6,L7,L8,L9,L10,L11,L12,P1,P2 hidden
class Legend legendBox
linkStyle default stroke:#666,stroke-width:2px
-linkStyle 0,1,2,3,4,5,6,7,8,9 stroke:#CC3333,stroke-width:4px
-linkStyle 10,12,13,14 stroke:#99CCFF,stroke-width:4px,stroke-dasharray:5 5
-linkStyle 15,20,23 stroke:#66CC66,stroke-width:4px
-linkStyle 16,17,19,21 stroke:#CC9900,stroke-width:4px,stroke-dasharray:5 5
-linkStyle 11,18,22 stroke:#999999,stroke-width:2px
-linkStyle 24 stroke:#B85450,stroke-width:2px
-linkStyle 25 stroke:#6C8EBF,stroke-width:2px,stroke-dasharray:5 5
-linkStyle 26 stroke:#82B366,stroke-width:2px
-linkStyle 27 stroke:#000000,stroke-width:2px
-linkStyle 28 stroke:#CC9900,stroke-width:2px,stroke-dasharray:5 5
+linkStyle 0,1 stroke:#CC9900,stroke-width:2px
+linkStyle 2,3,4,5,6,7 stroke:#CC3333,stroke-width:4px
+linkStyle 11,14 stroke:#66CC66,stroke-width:4px
+linkStyle 8,9,12,13 stroke:#CC9900,stroke-width:4px,stroke-dasharray:5 5
+linkStyle 10,15 stroke:#999999,stroke-width:2px
+linkStyle 16 stroke:#B85450,stroke-width:2px
+linkStyle 17 stroke:#82B366,stroke-width:2px
+linkStyle 18 stroke:#000000,stroke-width:2px
+linkStyle 19 stroke:#CC9900,stroke-width:2px,stroke-dasharray:5 5
+linkStyle 20 stroke:#CC9900,stroke-width:2px
+linkStyle 21 stroke:#FFFFFF
%% Make subgraph containers invisible
+style Gateways fill:none,stroke:none
style Spines fill:none,stroke:none
style Leaves fill:none,stroke:none
style Servers fill:none,stroke:none
@@ -277,9 +276,8 @@ style Servers fill:none,stroke:none
### Gateway
-Gateway could be added by adding `--gateway` flag to the `hhfab init` command and it'll be automatically added connected
-to two spines in case of spine-leaf topology or two leafs in case of the mesh topology, number of uplinks could be
-controlled using flags on the `hhfab vlab gen` command.
+The gateway is enabled by adding the `--gw` flag to `hhfab init`. It connects to two spines in a spine-leaf topology
+or to two leaves in a mesh topology. The number of uplinks can be controlled using flags on `hhfab vlab gen`.
### Lightweight Spine-Leaf
A default spine-leaf topology in VLAB requests more CPU and RAM than is commonly available. The lightweight
@@ -388,55 +386,50 @@ prerequisites for running the VLAB.
## Build the Installer and Start VLAB
-To build and start the virtual machines, use `hhfab vlab up`. For successive runs, use the `--kill-stale` flag to ensure that any virtual machines from a previous run are gone. `hhfab vlab up` runs in the foreground and does not return, which allows you to stop all VLAB VMs by simply pressing `Ctrl + C`.
+To build and start the virtual machines, use `hhfab vlab up`. This command runs in the foreground and does not return, which allows you to stop all VLAB VMs by pressing `Ctrl + C`.
```console
ubuntu@docs:~$ hhfab vlab up
-11:48:22 INF Hedgehog Fabricator version=v0.36.1
-11:48:22 INF Wiring hydrated successfully mode=if-not-present
-11:48:22 INF VLAB config created file=vlab/config.yaml
-11:48:22 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog
-11:48:22 INF Building installer control=control-1
-11:48:22 INF Adding recipe bin to installer control=control-1
-11:48:24 INF Adding k3s and tools to installer control=control-1
-11:48:25 INF Adding zot to installer control=control-1
-11:48:25 INF Adding cert-manager to installer control=control-1
-11:48:26 INF Adding config and included wiring to installer control=control-1
-11:48:26 INF Adding airgap artifacts to installer control=control-1
-11:48:36 INF Archiving installer path=/home/ubuntu/result/control-1-install.tgz control=control-1
-11:48:45 INF Creating ignition path=/home/ubuntu/result/control-1-install.ign control=control-1
-11:48:46 INF Taps and bridge are ready count=8
-11:48:46 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog
-11:48:46 INF Preparing new vm=control-1 type=control
-11:48:51 INF Preparing new vm=server-01 type=server
-11:48:52 INF Preparing new vm=server-02 type=server
-11:48:54 INF Preparing new vm=server-03 type=server
-11:48:55 INF Preparing new vm=server-04 type=server
-11:48:57 INF Preparing new vm=server-05 type=server
-11:48:58 INF Preparing new vm=server-06 type=server
-11:49:00 INF Preparing new vm=server-07 type=server
-11:49:01 INF Preparing new vm=server-08 type=server
-11:49:03 INF Preparing new vm=server-09 type=server
-11:49:04 INF Preparing new vm=server-10 type=server
-11:49:05 INF Preparing new vm=leaf-01 type=switch
-11:49:06 INF Preparing new vm=leaf-02 type=switch
-11:49:06 INF Preparing new vm=leaf-03 type=switch
-11:49:06 INF Preparing new vm=leaf-04 type=switch
-11:49:06 INF Preparing new vm=leaf-05 type=switch
-11:49:06 INF Preparing new vm=spine-01 type=switch
-11:49:06 INF Preparing new vm=spine-02 type=switch
-11:49:06 INF Starting VMs count=18 cpu="54 vCPUs" ram="49664 MB" disk="550 GB"
-11:49:59 INF Uploading control install vm=control-1 type=control
-11:53:39 INF Running control install vm=control-1 type=control
-11:53:40 INF control-install: 01:53:39 INF Hedgehog Fabricator Recipe version=v0.36.1 vm=control-1
-11:53:40 INF control-install: 01:53:39 INF Running control node installation vm=control-1
-12:00:32 INF control-install: 02:00:31 INF Control node installation complete vm=control-1
-12:00:32 INF Control node is ready vm=control-1 type=control
-12:00:32 INF All VMs are ready
+17:25:31 INF Hedgehog Fabricator version=v0.43.1
+17:25:31 INF Wiring hydrated successfully mode=if-not-present
+17:25:31 INF VLAB config loaded file=vlab/config.yaml
+17:25:31 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog
+17:25:31 INF Building control node installers
+17:25:31 INF Building installer name=control-1 type=control mode=iso
+17:25:31 INF Adding recipe bin and config to installer name=control-1 type=control mode=iso
+17:25:33 INF Adding k3s and tools to installer name=control-1 type=control mode=iso
+17:25:33 INF Adding toolbox to installer name=control-1 type=control mode=iso
+17:25:34 INF Adding zot to installer name=control-1 type=control mode=iso
+17:25:34 INF Adding flatcar upgrade bin to installer name=control-1 type=control mode=iso
+17:25:34 INF Adding cert-manager to installer name=control-1 type=control mode=iso
+17:25:34 INF Adding bash-completion to installer name=control-1 type=control mode=iso control=control-1
+17:25:34 INF Adding config and wiring files to installer name=control-1 type=control mode=iso
+17:25:34 INF Adding CLIs to installer name=control-1 type=control mode=iso
+17:25:35 INF Building installer image, may take up to 5-10 minutes name=control-1 type=control mode=iso
+...
+17:25:48 INF Taps and bridge are ready count=7
+17:25:48 INF Preparing new vm=control-1 type=control
+17:26:28 INF Preparing new vm=gateway-1 type=gateway
+17:27:12 INF Preparing new vm=server-01 type=server
+17:27:14 INF Preparing new vm=server-02 type=server
+17:27:16 INF Preparing new vm=server-03 type=server
+17:27:17 INF Preparing new vm=server-04 type=server
+17:27:19 INF Preparing new vm=server-05 type=server
+17:27:21 INF Preparing new vm=server-06 type=server
+17:27:23 INF Preparing new vm=leaf-01 type=switch
+17:27:23 INF Preparing new vm=leaf-02 type=switch
+17:27:23 INF Preparing new vm=leaf-03 type=switch
+17:27:23 INF Preparing new vm=spine-01 type=switch
+17:27:23 INF Preparing new vm=spine-02 type=switch
+17:27:24 INF Starting VMs count=13 cpu="46 vCPUs" ram="42496 MB" disk="460 GB"
+...
+17:35:11 INF install(control-1): Jan 30 17:35:11 control-1 hhfab-recipe[1529]: Jan 30 17:35:11.024 INF Control node installation complete
+17:35:21 INF All VMs are ready
+17:35:21 INF All K8s nodes are ready
+17:35:21 INF VLAB is ready
```
-When the message `INF Control node is ready vm=control-1 type=control` from the installer's output means that the installer has finished. After this line
-has been displayed, you can get into the control node and other VMs to watch the Fabric coming up and switches getting
-provisioned. See [Accessing the VLAB](#accessing-the-vlab).
+When the message `INF VLAB is ready` appears, the installer has finished. After this, you can get into the control
+node and other VMs to watch the Fabric coming up and switches getting provisioned. See [Accessing the VLAB](#accessing-the-vlab).
## Enable Outside connectivity from VLAB VMs
@@ -458,25 +451,20 @@ You can select device you want to access or pass the name using the `--vm` flag.
```console
ubuntu@docs:~$ hhfab vlab ssh
Use the arrow keys to navigate: ↓ ↑ → ← and / toggles search
-SSH to VM:
+Select target for ssh:
🦔 control-1
+ gateway-1
+ leaf-01
+ leaf-02
+ leaf-03
server-01
server-02
server-03
server-04
server-05
server-06
- leaf-01
- leaf-02
- leaf-03
spine-01
spine-02
-
------------ VM Details ------------
-ID: 0
-Name: control-1
-Ready: true
-Basedir: .hhfab/vlab-vms/control-1
```
### Default credentials
@@ -497,12 +485,12 @@ After the switches are provisioned, the command returns something like this:
```console
core@control-1 ~ $ kubectl get agents -o wide
-NAME ROLE DESCR HWSKU ASIC HEARTBEAT APPLIED APPLIEDG CURRENTG VERSION SOFTWARE ATTEMPT ATTEMPTG AGE
-leaf-01 server-leaf VS-01 MCLAG 1 DellEMC-S5248f-P-25G-DPB vs 30s 5m5s 4 4 v0.23.0 4.1.1-Enterprise_Base 5m5s 4 10m
-leaf-02 server-leaf VS-02 MCLAG 1 DellEMC-S5248f-P-25G-DPB vs 27s 3m30s 3 3 v0.23.0 4.1.1-Enterprise_Base 3m30s 3 10m
-leaf-03 server-leaf VS-03 DellEMC-S5248f-P-25G-DPB vs 18s 3m52s 4 4 v0.23.0 4.1.1-Enterprise_Base 3m52s 4 10m
-spine-01 spine VS-04 DellEMC-S5248f-P-25G-DPB vs 26s 3m59s 3 3 v0.23.0 4.1.1-Enterprise_Base 3m59s 3 10m
-spine-02 spine VS-05 DellEMC-S5248f-P-25G-DPB vs 19s 3m53s 4 4 v0.23.0 4.1.1-Enterprise_Base 3m53s 4 10m
+NAME ROLE DESCR HWSKU HEARTBEAT APPLIED APPLIEDG CURRENTG VERSION SOFTWARE ATTEMPT ATTEMPTG AGE
+leaf-01 server-leaf VS-01 ESLAG 1 DellEMC-S5248f-P-25G-DPB 11s 2m59s 1 1 v0.96.2 4.5.0-Enterprise_Base 2m59s 1 10m
+leaf-02 server-leaf VS-02 ESLAG 1 DellEMC-S5248f-P-25G-DPB 29s 3m17s 1 1 v0.96.2 4.5.0-Enterprise_Base 3m17s 1 10m
+leaf-03 server-leaf VS-03 DellEMC-S5248f-P-25G-DPB 19s 3m7s 1 1 v0.96.2 4.5.0-Enterprise_Base 3m7s 1 10m
+spine-01 spine VS-04 DellEMC-S5248f-P-25G-DPB 28s 3m16s 1 1 v0.96.2 4.5.0-Enterprise_Base 3m16s 1 10m
+spine-02 spine VS-05 DellEMC-S5248f-P-25G-DPB 17s 3m6s 1 1 v0.96.2 4.5.0-Enterprise_Base 3m6s 1 10m
```
The `Heartbeat` column shows how long ago the switch has sent the heartbeat to the control node. The `Applied` column
@@ -524,12 +512,12 @@ For example, to get the list of switches, run:
```console
core@control-1 ~ $ kubectl get switch
-NAME ROLE DESCR GROUPS LOCATIONUUID AGE
-leaf-01 server-leaf VS-01 MCLAG 1 5e2ae08a-8ba9-599a-ae0f-58c17cbbac67 6h10m
-leaf-02 server-leaf VS-02 MCLAG 1 5a310b84-153e-5e1c-ae99-dff9bf1bfc91 6h10m
-leaf-03 server-leaf VS-03 5f5f4ad5-c300-5ae3-9e47-f7898a087969 6h10m
-spine-01 spine VS-04 3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da 6h10m
-spine-02 spine VS-05 96fbd4eb-53b5-5a4c-8d6a-bbc27d883030 6h10m
+NAME PROFILE ROLE DESCR GROUPS AGE
+leaf-01 vs server-leaf VS-01 ESLAG 1 ["eslag-1"] 10m
+leaf-02 vs server-leaf VS-02 ESLAG 1 ["eslag-1"] 10m
+leaf-03 vs server-leaf VS-03 10m
+spine-01 vs spine VS-04 10m
+spine-02 vs spine VS-05 10m
```
Similarly, to get the list of servers, run:
@@ -537,33 +525,34 @@ Similarly, to get the list of servers, run:
```console
core@control-1 ~ $ kubectl get server
NAME TYPE DESCR AGE
-control-1 control Control node 6h10m
-server-01 S-01 MCLAG leaf-01 leaf-02 6h10m
-server-02 S-02 MCLAG leaf-01 leaf-02 6h10m
-server-03 S-03 Unbundled leaf-01 6h10m
-server-04 S-04 Bundled leaf-02 6h10m
-server-05 S-05 Unbundled leaf-03 6h10m
-server-06 S-06 Bundled leaf-03 6h10m
+control-1 control Control node 10m
+server-01 S-01 ESLAG leaf-01 leaf-02 10m
+server-02 S-02 ESLAG leaf-01 leaf-02 10m
+server-03 S-03 Unbundled leaf-01 10m
+server-04 S-04 Bundled leaf-02 10m
+server-05 S-05 Unbundled leaf-03 10m
+server-06 S-06 Bundled leaf-03 10m
```
For connections, use:
```console
core@control-1 ~ $ kubectl get connection
-NAME TYPE AGE
-leaf-01--mclag-domain--leaf-02 mclag-domain 6h11m
-server-01--mclag--leaf-01--leaf-02 mclag 6h11m
-server-02--mclag--leaf-01--leaf-02 mclag 6h11m
-server-03--unbundled--leaf-01 unbundled 6h11m
-server-04--bundled--leaf-02 bundled 6h11m
-server-05--unbundled--leaf-03 unbundled 6h11m
-server-06--bundled--leaf-03 bundled 6h11m
-spine-01--fabric--leaf-01 fabric 6h11m
-spine-01--fabric--leaf-02 fabric 6h11m
-spine-01--fabric--leaf-03 fabric 6h11m
-spine-02--fabric--leaf-01 fabric 6h11m
-spine-02--fabric--leaf-02 fabric 6h11m
-spine-02--fabric--leaf-03 fabric 6h11m
+NAME TYPE AGE
+server-01--eslag--leaf-01--leaf-02 eslag 10m
+server-02--eslag--leaf-01--leaf-02 eslag 10m
+server-03--unbundled--leaf-01 unbundled 10m
+server-04--bundled--leaf-02 bundled 10m
+server-05--unbundled--leaf-03 unbundled 10m
+server-06--bundled--leaf-03 bundled 10m
+spine-01--fabric--leaf-01 fabric 10m
+spine-01--fabric--leaf-02 fabric 10m
+spine-01--fabric--leaf-03 fabric 10m
+spine-01--gateway--gateway-1 gateway 10m
+spine-02--fabric--leaf-01 fabric 10m
+spine-02--fabric--leaf-02 fabric 10m
+spine-02--fabric--leaf-03 fabric 10m
+spine-02--gateway--gateway-1 gateway 10m
```
For IPv4 and VLAN namespaces, use:
@@ -571,11 +560,11 @@ For IPv4 and VLAN namespaces, use:
```console
core@control-1 ~ $ kubectl get ipns
NAME SUBNETS AGE
-default ["10.0.0.0/16"] 6h12m
+default ["10.0.0.0/16"] 10m
core@control-1 ~ $ kubectl get vlanns
NAME AGE
-default 6h12m
+default 10m
```
## Reset VLAB