
Traffic Load Balancing in Cisco UCS

December 21, 2015

Whenever I deploy Cisco UCS at a customer site, one question I get asked a lot is how traffic flows within the system: between the VMs running on the blades and the FEX modules, between the FEX modules and the Fabric Interconnects, and finally how it’s uplinked to the network core.

Cisco has a range of CNA cards for UCS blades. With the VIC 1280 you get 8 x 10Gb ports split between two FEX modules for redundancy. The FEX modules themselves can have up to 8 x 10Gb Fabric Interconnect facing interfaces, which can give you up to 160Gb of bandwidth per chassis. All these numbers may sound impressive, but unless you understand how your VM traffic flows through UCS, it’s easy to make wrong assumptions about the per-VM and aggregate bandwidth you can achieve. So let’s dive deep into UCS and shed some light on how VM traffic is load-balanced within the system.

UCS Hardware Components

Each Fabric Extender (FEX) has external and internal ports. External FEX ports are patched to the FIs, and internal ports are internally wired to the blade adapters. The FEX 2204 has 4 external and 16 internal ports, while the FEX 2208 has 8 external and 32 internal ports.

External ports are connected to the FIs in powers of two: 1, 2, 4 or 8 ports per FEX, and they form a port channel (make sure to use the “Port Channel” link grouping preference under the Chassis/FEX Discovery Policy). The same rule applies to the blade Virtual Interface Cards (VICs). The most common VIC 1240 and 1280 have 4 x 10Gb and 8 x 10Gb ports respectively and also form a port channel to the internal FEX ports. Every VIC adapter is connected to both FEX modules for redundancy.
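
If you want to check what the FEX uplinks look like from the switch side, you can do it from the FI CLI. A quick sketch, assuming your UCS release exposes the standard NX-OS FEX commands (the FEX number is just an example):

ucs # connect nxos
ucs(nxos)# show fex
ucs(nxos)# show fex 1 detail

The detailed view lists the fabric ports each FEX has brought up towards the Fabric Interconnect.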

[Image: chassis_network]

The Fabric Interconnects are then patched to your network core and FC fabric (if you have one). Whether the Ethernet uplinks are individual links or port channels depends on your network topology. For Fibre Channel uplinks the rule of thumb is to patch FI A to your FC Fabric A and FI B to FC Fabric B, which follows the common FC traffic isolation principle.

Virtual Circuits

To provide network and storage connectivity to the blades, you create virtual NICs and virtual HBAs on each blade. Since UCS internally uses FCoE to transfer FC frames, both vNICs and vHBAs use the same 10GbE uplinks to send and receive traffic. It’s worth mentioning that Cisco uses the Data Center Bridging (DCB) protocol with its sub-protocols Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS), which guarantee that FC frames get higher priority in the queue and are processed first to ensure low latency. But I digress.
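
If you’re curious, you can peek at these priority classes from the FI CLI. A minimal sketch, assuming the FI’s NX-OS shell exposes the standard Nexus queuing commands (the interface name is just an example):

ucs # connect nxos
ucs(nxos)# show queuing interface ethernet 1/1
ucs(nxos)# show interface ethernet 1/1 priority-flow-control

The first command shows the per-class queuing and bandwidth configuration, and the second shows whether PFC is active on the port.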

UCS assigns a virtual circuit to each virtual adapter, which is a representation of how the traffic traverses the system all the way from the VIC port to a FEX internal port, then a FEX external port, an FI server port and finally an FI uplink. You can trace the full path of each virtual adapter in UCS Manager by selecting a Service Profile and viewing the VIF Paths tab.

[Image: vif_paths]

In this example we have a blade with four vNICs and two vHBAs, which are split between the two fabrics. All virtual adapters on fabric A are connected through VIC port channel PC-1283, which is represented as port channel PC-1025 on the FEX A side. The traffic then leaves FEX A and reaches Fabric Interconnect A, which sends it out to the network core through port channel A/PC-1.
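
The same pinning information is available from the CLI. A hedged example, assuming your UCS release supports the pinning show commands:

ucs # connect nxos
ucs(nxos)# show pinning server-interfaces
ucs(nxos)# show pinning border-interfaces

These list which server-facing virtual interfaces are pinned to which uplinks.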

You can also get the list of port channels from the FI CLI:

ucs # connect nxos
ucs(nxos)# show port-channel summary

[Image: ucs_portchannels]

Network Load Balancing

Now that we know how all components are interconnected to each other, let’s discuss the traffic flow in a typical VMware environment and how we achieve the massive network throughput that UCS provides.

As an example, let’s take a look at the vSwitch where your VM Network port group is configured. The vSwitch has two uplinks: one goes to Fabric A and the other to Fabric B for redundancy. The default load balancing policy on a vSwitch is “Route based on the originating port ID”, which essentially pins all traffic from a VM to a particular uplink. vSphere makes sure that VMs are evenly distributed between the uplinks to use all the network bandwidth available.
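
You can confirm which policy is active from the ESXi shell. A minimal example, assuming a standard vSwitch named vSwitch0 (adjust the name for your environment):

# esxcli network vswitch standard policy failover get -v vSwitch0

With the default policy the Load Balancing field shows srcport, i.e. route based on the originating virtual port ID.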

From each uplink (or vNIC in UCS terms) traffic is forwarded through an adapter port channel to a FEX, then to a Fabric Interconnect, and finally leaves UCS through an FI uplink. Within UCS, traffic is distributed between port channel members using a source/destination IP hash algorithm, which is even more granular and distributes traffic very efficiently between all members of a port channel all the way up to your network core.
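
You can check which hashing algorithm the Fabric Interconnect applies to its port channels from the NX-OS shell; a quick sketch:

ucs # connect nxos
ucs(nxos)# show port-channel load-balance

This prints the system-wide load-balancing criteria used for all port channels on the FI.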

[Image: ucs_loadbalancing]

If you look at the vSwitch you’ll see that with UCS each uplink shows the maximum bandwidth available to the vNIC and is not limited to the 10Gb speed of a single port channel member. Why is this so powerful? Because with UCS you don’t need to slice an adapter’s available bandwidth between different types of traffic. Even though you provision multiple vNICs and vHBAs for your vSphere hosts, UCS uses the same port channel links (20Gb in the example below) from the VIC adapter to transfer all traffic and takes care of load balancing for you.

[Image: vswitch_uplinks]
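
You can see the same thing from the host side by listing the physical adapters; a minimal example from the ESXi shell (vmnic numbering will vary per host):

# esxcli network nic list

On a UCS blade with a 2 x 10Gb port channel per fabric, each vmnic reports the 20Gb aggregate link speed rather than 10Gb.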

You may legitimately ask: if UCS uses the same pipe to transfer all data regardless of which vSwitch uplink is being used, how can I make sure that different types of traffic, such as vMotion, storage, VM traffic and replication, don’t compete for the same pipe? First, ask yourself whether your workloads can actually saturate that much bandwidth. If the answer is yes, you can use another great UCS feature: QoS. QoS lets you assign a minimum bandwidth guarantee on a per-vNIC/vHBA basis. But that’s a topic for another blog post.

In this post I tried to summarise the logic behind UCS traffic distribution. If you want to dig deeper into UCS network architecture, there are a lot of great bloggers out there who cover it in much more detail.


Troubleshooting Cisco UCS LDAP

December 4, 2015

If you’ve ever configured LDAP integration on a blade chassis or a storage array, you know that troubleshooting authentication on these things is painful. The system will accept all your configuration settings, and if you’ve made a mistake somewhere, all you get when you try to log in is an “Authentication Error” message with no clue as to what the actual error is.

Committing configuration changes

There are three common places where you can make a mistake when setting up LDAP authentication on UCS. Number one is committing configuration changes to the Fabric Interconnects in UCS Manager.

There are four configuration options which you need to set to enable Active Directory authentication to your domain:

  • LDAP Providers – these are your domain controllers
  • LDAP Provider Groups – used to group multiple domain controllers of the same domain
  • LDAP Group Maps – where you give permissions to your AD groups and users
  • Authentication Domains – the final configuration step, where you enable authentication via the domain
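
Most of these objects can also be inspected from the UCS Manager CLI. A hedged sketch, assuming the security/ldap scope layout of the UCSM 2.x releases (scope and command names may differ between versions):

ucs # scope security
ucs /security # scope ldap
ucs /security/ldap # show server
ucs /security/ldap # show ldap-group

The first show command lists your LDAP Providers and the second your LDAP Group Maps.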

Now, if you decide to delete an LDAP Provider Group that is configured under an Authentication Domain in an attempt to change the settings, this may become an issue.

What is confusing here is that UCS Manager will let you delete the LDAP Provider Group, save the changes, and the LDAP Provider Group will disappear from the list. You may legitimately conclude that it’s been deleted from UCS, but it actually hasn’t. This is what you’ll see in the UCS Manager logs:

[FSM:STAGE:STALE-FAIL]: external aaa server configuration to primary(FSM-STAGE:sam:dme:AaaEpUpdateEp:SetEpLocal)
[FSM:STAGE:REMOTE-ERROR]: Result: resource-unavailable Code: ERR-ep-set-error Message: Re-ordering/Deletion of Providers cannot be applied while ldap is used for yourdomain.com(Domain) authentication(sam:dme:AaaEpUpdateEp:SetEpLocal)

The record stays on the UCS, and you may encounter very confusing issues where you change your LDAP Provider settings but the changes are not reflected on the UCS. So make sure to delete the object from the higher-level entity first.

Distinguished Name typos

There are two ways to group Active Directory entities on a domain controller: Security Groups and Organizational Units. When configuring your AD bind account in the LDAP Providers section and setting up permissions in LDAP Group Maps, make sure not to confuse the two. The best advice I can give is to always use the ADSI Edit tool to find the exact DN. Why? As an example, let’s say you want to give permissions to the built-in Administrators group and you use the following DN:

CN=Administrators,OU=Builtin,DC=yourdomain,DC=com

This won’t work, because even though the Builtin container may look like an OU, it’s actually a CN in AD, as are the Users and Computers containers.
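
For reference, the DN that actually works for this group (you can see it confirmed in the debug output later in this post) is:

CN=Administrators,CN=Builtin,DC=yourdomain,DC=com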

[Image: adsi_edit]

ADSI Edit will give you the exact Distinguished Name. Make sure to use it to save yourself the hassle.

Group Authorization settings

Last but not least are the following two LDAP Provider configuration settings:

  • Group Authorization – whether UCS searches within groups when authenticating
  • Group Recursion – whether UCS searches groups recursively

If you add an AD group that the user is a member of to LDAP Group Maps but do not enable Group Authorization, UCS simply won’t search within the group. Enable this option unless you give permissions only on a per-user basis.

The second option enables recursive search within AD groups. If you have nested groups in AD (which most people do), enable recursive search or UCS won’t look deeper than one level.

If you get really stuck

If you’ve set everything up and are certain the settings are correct, but authentication still doesn’t work, there is a relatively easy way to localize the issue.

The first step is to check whether UCS can bind to your LDAP Providers and authenticate users. Pick a user (LDAP Group Maps don’t matter at this point), SSH to a Fabric Interconnect, and type the following:

ucs # connect nxos
ucs(nxos)# test aaa server ldap yourdc.yourdomain.com john password123

Here yourdc.yourdomain.com is the domain controller you’ve configured in the LDAP Providers section. If authentication doesn’t work, then the issue is in the LDAP Provider settings.

If you can authenticate, then the next step is to make sure that UCS searches through the right AD groups. To check that you will need to enable LDAP authentication logging on a Fabric Interconnect:

ucs # connect nxos
ucs(nxos)# debug ldap aaa-request-lowlevel

Now try to authenticate and look through the list of groups UCS is searching through. If you can’t see the group your user is a member of, then you are most likely using the wrong DN in LDAP Group Maps.

In my case the settings are configured correctly and I can see that UCS is searching in the Builtin Administrators group:

2015 Dec 1 14:12:19.581737 ldap: value: CN=Enterprise Admins,CN=Users,DC=yourdomain,DC=com
2015 Dec 1 14:12:19.581747 ldap: ldap_add_to_groups: Discarding. group map not configured for CN=Enterprise Admins,CN=Users,DC=yourdomain,DC=com
2015 Dec 1 14:12:19.581756 ldap: value: CN=Administrators,CN=Builtin,DC=yourdomain,DC=com
2015 Dec 1 14:12:19.581767 ldap: ldap_add_to_groups: successfully added group:CN=Administrators,CN=Builtin,DC=yourdomain,DC=com
2015 Dec 1 14:12:19.581777 ldap: value: CN=Exchange Organization Administrators,OU=Microsoft Exchange Security Groups,DC=yourdomain,DC=com

Make sure to disable logging when you’re done:

ucs(nxos)# undebug all
