
Dell Force10 Part 2: VLT Basics

July 10, 2016

Last time I wrote a blog post on the initial configuration of Force10 switches, which you can find here. There I talked about firmware upgrades and basic features such as STP and Flow Control. In this blog post I would like to touch on a key feature of Force10 switches: Virtual Link Trunking (VLT).

VLT is Force10’s implementation of Multi-Chassis Link Aggregation Group (MLAG), which is similar to Virtual Port Channels (vPC) on Cisco Nexus switches. The goal of VLT is to let you establish one aggregated link to two physical network switches in a loop-free topology, something that is not possible with two standalone switches.

You could say that switch stacking gives you similar capabilities, and you would be right. The issue with stacked switches, though, is that they act as a single switch not only from the data plane point of view, but also from the control plane point of view. The implication is that if you need to upgrade a switch stack, you have to reboot both switches at the same time, which brings down your network. If you have an iSCSI or NFS storage array connected to the stack, this may cause trouble, especially in enterprise environments.

With VLT you also have a single logical data plane, but each switch keeps its own control plane. As a result, each switch can be managed and upgraded separately without full network downtime.

VLT Terminology

Virtual Link Trunking uses the following set of terms:

  • VLT peer – one of the two switches participating in VLT (you can have a maximum of two switches in a VLT domain)
  • VLT interconnect (VLTi) – interconnect link between the two switches to synchronize the MAC address tables and other VLT-related data
  • VLT backup link – heartbeat link used to exchange keepalive messages between the two switches; it is also used to determine the peer’s state if the VLTi link fails
  • VLT – the name of the feature itself, Virtual Link Trunking, as well as an individual link aggregation group spanning both peers, a Virtual Link Trunk. To avoid ambiguity, we will call the aggregated link a VLT LAG.
  • VLT domain – grouping of all of the above

VLT Topology

This is what a sample VLT domain looks like. S4048-ON switches have six 40Gb QSFP+ ports, two of which we use for the VLT interconnect. It’s recommended to use a static LAG for VLTi.

[Image: basic_vlt]

Two 1Gb links are used for VLT backup. You can use switch out-of-band management ports for this. Four 10Gb links form a VLT LAG to the upstream core switch.
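
To make this topology more concrete, below is a minimal sketch of what pushing such a configuration to one of the peers could look like with Python and the Netmiko library. Treat it as an outline only: the management IPs, interface names and port-channel numbers are hypothetical, and the exact FTOS command syntax should be verified against your firmware version. We will go through the real configuration step by step in the next posts.

    # A rough sketch only: interface names, IPs and port-channel numbers are
    # placeholders, and the FTOS commands should be checked against your firmware.
    # Requires the Netmiko library and SSH access to the switch.
    from netmiko import ConnectHandler

    switch1 = {
        "device_type": "dell_force10",   # Netmiko driver name for FTOS (check your Netmiko version)
        "host": "192.168.10.1",          # hypothetical OOB management IP of the first VLT peer
        "username": "admin",
        "password": "password",
    }

    vlt_config = [
        # Static LAG over two 40Gb QSFP+ ports used as the VLT interconnect (VLTi)
        "interface Port-channel 127",
        "description VLTi",
        "channel-member fortyGigE 1/49,1/50",
        "no shutdown",
        # VLT domain: VLTi peer link and backup destination (peer's OOB management IP)
        "vlt domain 1",
        "peer-link port-channel 127",
        "back-up destination 192.168.10.2",
        "unit-id 0",
        # VLT LAG towards the upstream core switch
        "interface Port-channel 1",
        "description core-uplink",
        "vlt-peer-lag port-channel 1",
        "no shutdown",
    ]

    conn = ConnectHandler(**switch1)
    print(conn.send_config_set(vlt_config))
    conn.disconnect()

The second peer would get a mirror-image configuration, with unit-id 1 and the backup destination pointing back at the first switch.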

Use Cases

So where is this actually helpful? The vast majority of today’s environments are virtualized and do not require LAGs. vSphere already uses teaming on vSwitch uplinks to distribute traffic across all network ports by default. There are some use cases in VMware environments where you can create a LAG to a vSphere Distributed Switch for faster link failure convergence or improved packet switching. Unless you have a really large vSphere environment this is generally not required, but it is an option you can enable later if needed. Read Chris Wahl’s blog post here for more info.

Where VLT is really helpful is in building a loop-free network topology in your datacenter. See, all your vSphere hosts are connected to both Force10 switches for redundancy. Since traffic arrives at either of the switches depending on which uplink is picked on an ESXi host, you have to make sure that VMs on switch 1 are able to communicate with VMs on switch 2. If all you had in your environment were two Force10 switches, you would establish a LAG between the two and be done with it. But if your network topology is a bit larger than this and you have at least one additional core switch/router in your environment, you are faced with the following dilemma: how can you ensure efficient traffic switching in your network without creating loops?

[Image: stp_loop]

You can no longer create a LAG between the two Force10 switches, as it would create a loop. Your only option is to keep the switches connected only to the core and not to each other. And by doing that you force all traffic from VMs on switch 1 destined to VMs on switch 2, and vice versa, to traverse the core.

[Image: east_west_traffic]

And that’s where VLT comes into play. All east-west traffic between servers is contained within the VLT domain and doesn’t need to traverse the core. As shown above, without VLT, traffic from one switch to the other would have to go from switch 1 to the core and then back from the core to switch 2. In a VLT domain, traffic between the switches goes directly from switch 1 to switch 2 over the VLTi.

Conclusion

That’s a brief introduction to VLT theory. In the next few posts we will look at how exactly VLT is configured and map theory to practice.


vDS Health Check: Useful, but Overlooked

June 11, 2016

As of June 30, 2016 the vSphere Enterprise licence will no longer be available. As more and more customers move to the Enterprise Plus licensing scheme, we will see wider adoption of Enterprise Plus features, such as vSphere Distributed Switch, SIOC, NIOC and Storage DRS.

Therefore, there will be a continuing demand for better coverage of these features, and I want to start blogging about them more to meet this demand. The first post will be about one of the hidden gems – vSphere Distributed Switch Health Check.

Feature overview

The reason I picked Health Check specifically is that it’s very helpful when troubleshooting connectivity issues on vSphere Distributed Switch uplinks. At the same time it’s less well known, because it’s buried deep in the vDS settings section, is available only from the Web Client and is disabled by default.

vDS health check is capable of doing the following tests:

  • VLAN and MTU
  • Teaming and failover

By sending broadcasts from one link and receiving them on another, vDS health check can determine whether a VLAN is not allowed on a trunk or whether there is an MTU mismatch. In the same way, if you’re using LACP, vDS will alert you to any port channel misconfigurations.

Usage example

Before you can start using vDS health check you need to enable it in vSphere Web Client > Networking > dvSwitch > Manage > Settings > Health check. Click on the Edit button and enable both tests.

[Image: enable_healthcheck]
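
If you prefer to script the same change instead of clicking through the Web Client, it can also be done through the vSphere API. Here is a minimal pyVmomi sketch; the vCenter address, credentials and dvSwitch name are placeholders, and the 30-minute interval is just an example.

    # Minimal pyVmomi sketch for enabling both vDS health check tests.
    # vCenter address, credentials and the dvSwitch name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only: skips certificate verification
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the distributed switch by name
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch")
    view.Destroy()

    # Enable the VLAN/MTU and teaming health checks with a 30-minute interval
    health_config = [
        vim.dvs.VmwareDistributedVirtualSwitch.VlanMtuHealthCheckConfig(
            enable=True, interval=30),
        vim.dvs.VmwareDistributedVirtualSwitch.TeamingHealthCheckConfig(
            enable=True, interval=30),
    ]
    dvs.UpdateDVSHealthCheckConfig_Task(healthCheckConfig=health_config)

    Disconnect(si)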

Now if you go to the Monitor tab and click on the Health section, after a few minutes of initial checks you will see a per-host breakdown of identified issues.

[Image: healthcheck_results]

In my case I was able to immediately determine that VLAN 120 was not trunked on the physical switch. The port group this VLAN ID was assigned to had no VMs at the time, so the issue was fixed proactively, before it could cause any problems.

[Image: vlan_mismatch]
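
The same per-host results can also be read from the API, which is handy if you want to check them from a script. Here is a rough sketch that assumes a dvs object obtained the same way as in the previous snippet; the property names follow the public vSphere API (DVSRuntimeInfo and HostMemberHealthCheckResult) and may need adjusting for your pyVmomi version.

    # Rough sketch: print vDS health check results per host.
    # Assumes 'dvs' is the vim.DistributedVirtualSwitch object from the previous snippet.
    def print_health_check_results(dvs):
        runtime = dvs.runtime  # DVSRuntimeInfo, available on vSphere 5.1 and later
        if not runtime or not runtime.hostMemberRuntime:
            print("No health check data available yet")
            return
        for member in runtime.hostMemberRuntime:           # one entry per host
            print(member.host.name)
            for result in member.healthCheckResult or []:  # VLAN/MTU/teaming results
                print("  " + result.summary)

    print_health_check_results(dvs)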

Possible use cases

The above example is a very straightforward one. The VLAN was not added to the trunk on the physical switch for any of the uplink ports, so the issue would have been discovered right after the first VM was added to the port group.

But what if the VLAN was missing on only one of the host’s uplinks? A VM could be running fine on another host, and after a vMotion (during maintenance on that host, for example) it could get migrated to the affected host and lose connectivity. The result – impact to production workloads and time wasted on troubleshooting.

MTU checks are particularly helpful for environments where a non-standard MTU size is used, such as 9000-byte jumbo frames for iSCSI. It’s important for the MTU to match on both the vDS and the physical switch, and this check confirms exactly that.
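
As a side note, the MTU configured on the vDS itself can be verified from the same API, again assuming the dvs object from the snippet above:

    # Print the MTU configured on the distributed switch (e.g. 9000 for jumbo frames).
    # Assumes 'dvs' is the vim.DistributedVirtualSwitch object from the earlier snippet.
    print("vDS MTU: %d" % dvs.config.maxMtu)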

And last but not least, the teaming and failover test can be useful when you’re using the LACP capability of the vDS and one of the uplinks has not been added to the port channel configuration, which can also cause some nasty issues.

Conclusion

In my opinion vSphere Distributed Switch Health Check is one of those valuable but overlooked features. I suggest giving it a go if you haven’t already done so. It will notify you of any newly introduced network issues, and who knows, maybe it will even find a network mismatch in your current vDS configuration.