AHV Basics – Part 1 AHV Networking



Often when talking to customers about AHV, they are somewhat concerned that various aspects differ from those of other hypervisors they are familiar with, so I thought I would put together some brief posts explaining the basics, starting with networking.

AHV uses Open vSwitch (OVS) to connect the hypervisor, CVMs, and guest VMs to each other as well as to the physical network. As you would expect, the OVS service runs on every AHV node and starts automatically.
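As a quick illustration, you can see what OVS has set up on a host with standard Open vSwitch tooling. This is a generic sketch run from the AHV host shell; service names and the supported workflow can vary by AOS release:

```shell
# Dump the OVS configuration on this host: bridges, ports, and bonds.
ovs-vsctl show

# Check that the OVS service is up (systemd-based AHV hosts).
systemctl status openvswitch
```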

What is OVS?

OVS is open source software implemented in the Linux kernel designed to work in virtualisation environments. OVS behaves in the same way as a layer-2 learning switch in that it maintains a MAC address table. The hypervisor host and VMs connect to virtual ports on the switch. OVS supports many popular switch features, including VLAN tagging, Link Aggregation Control Protocol (LACP), port mirroring, and quality of service (QoS). Each AHV server maintains an OVS instance, and all OVS instances combine to form a single logical switch. Constructs called bridges manage the switch instances residing on the AHV hosts.
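To make the layer-2 learning behaviour concrete, the MAC address table a bridge has built up can be inspected with the stock OVS utilities. The port name `tap0` below is a hypothetical example of a VM's tap port:

```shell
# Show the MAC learning table for bridge br0: port, VLAN, MAC, and age.
ovs-appctl fdb/show br0

# Example of the VLAN tagging feature: tag a VM's tap port with VLAN 10.
ovs-vsctl set port tap0 tag=10
```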

What is a Bridge?

A bridge acts as a virtual switch to manage network traffic between physical and virtual network interfaces. The default AHV configuration includes an OVS bridge called br0 and a native Linux bridge called virbr0. The virbr0 Linux bridge carries management and storage communication between the CVM and AHV host. All other storage, host, and VM network traffic flows through the br0 OVS bridge. All AHV hosts, VMs, and physical interfaces use ports for connectivity to the bridge.
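On a default host you can see both bridges described above. A minimal sketch, assuming standard Linux and OVS tooling is available on the host:

```shell
# OVS bridges on this host -- br0 by default.
ovs-vsctl list-br

# virbr0 is a native Linux bridge, so it shows up as an
# ordinary kernel network device rather than an OVS bridge.
ip link show virbr0
```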

What is a port?

A port is a logical construct created within a bridge that represents connectivity to the virtual switch. AHV makes use of several port types including internal, tap, VXLAN, and bond.

– Internal ports have the same name as the default bridge (br0) and provide access for the AHV host

– Tap ports are used to connect the virtual NICs that are presented to VMs

– VXLAN ports are used for the IP address management (IPAM) functionality built into AHV

– Bonded ports are used to provide NIC teaming functionality at the physical layer of each AHV host
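Listing the ports on the default bridge shows these types side by side. Illustrative only; the tap port names on your hosts will differ:

```shell
# Ports attached to br0: the internal port (also named br0),
# the uplink bond, and one tap port per VM vNIC.
ovs-vsctl list-ports br0
```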

What is a Bond?

A bond, or bonded port, aggregates the physical interfaces on an AHV host. By default, all physical NICs are placed into a default bond called br0-up in br0. Bonds allow for different load balancing modes, including active-backup, balance-slb, and balance-tcp. LACP can also be utilised on a bond to negotiate link aggregation with a physical switch. Balance-slb is recommended since it gives you active/active uplinks without requiring any link aggregation configuration on the upstream switches. Balance-slb balances based on source MAC address and rebalances traffic periodically; the default interval is 10 seconds. You can increase this to 60 seconds to avoid excessive movement of source MAC addresses between the upstream switches.
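The bond state and load balancing settings above can be inspected and changed with generic OVS commands. A hedged sketch assuming the default bond name br0-up; note that OVS takes the rebalance interval in milliseconds, and that on AHV such changes are normally made through the Nutanix-supported workflow (e.g. from the CVM) rather than directly on the host:

```shell
# Show the current bond state: mode, member links, and which is active.
ovs-appctl bond/show br0-up

# Switch the bond to balance-slb and raise the rebalance interval
# to 60 seconds (60000 ms).
ovs-vsctl set port br0-up bond_mode=balance-slb \
    other_config:bond-rebalance-interval=60000
```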

The following diagrams are taken from the Nutanix AHV Best Practices Guide and show the differences between the load balancing modes:

Active/Backup

Balance-slb (Active/Active)

Balance-tcp (Active/Active)

Here is a diagram, also taken from the Nutanix AHV Best Practices Guide, showing the default OVS configuration of an AHV host.