Posts Tagged ‘port channel’

Beginner’s Guide to HPE 5000 Series Switches

October 14, 2017

I don’t closely track the popularity of my blog. If what I share helps people in their day-to-day job, that’s already good enough for me. But I do look at site statistics now and then out of curiosity, and it seems that network-related posts get a lot of attention. A blog post I wrote a while ago on Dell N4000 switches has quickly made it into the top five over the last year.

So it seems that there is demand for entry-level switch configuration guides. I’ve worked with quite a few different switch brands over the years, so I thought I would build on the success of the Dell blog post and this time write about the HPE FlexNetwork/FlexFabric 5000 switch series.

Operating Systems

HPE has several network switch product lines, and I won’t even try to cover all of them in this post. But it’s important to know that there are a few different operating systems you can encounter while working with HPE network switches. There is the familiar ProCurve product portfolio (now merged with Aruba), which is based on the ProVision operating system.

The HPE FlexNetwork/FlexFabric 5000 series, on the other hand, is based on the Comware operating system. It has a different CLI command set and can be a complete surprise if you’ve only worked with ProCurve switches before. So this blog post will be particularly valuable for those who are dealing with HPE 5000 switches for the first time.

The following guide has been tested on a pair of HPE FlexFabric 5700 series switches. Even though the commands are mostly the same on other switch series, such as FlexNetwork 5800, there might be some minor differences.

Initial Configuration

When the switch is booted for the first time it starts automatic configuration by trying to obtain settings over DHCP, which you can interrupt with Ctrl+C to get straight to the CLI.

You start in user view, where you can run display commands to review the switch settings. To start the configuration, change to system view:

> system-view

Let’s start by configuring remote access to the switch. There are two ways you can do that: you can either use the out-of-band management port:

> interface M-GigabitEthernet 0/0/0
> ip address 10.10.10.10 255.255.255.0
> quit
> ip route-static 0.0.0.0 0.0.0.0 10.10.10.1

Or you can configure a VLAN interface IP address:

> interface vlan-interface 1
> ip address 10.10.10.10 255.255.255.0
> quit
> ip route-static 0.0.0.0 0.0.0.0 10.10.10.1
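
Either way, you can double-check the management IP and the default route before moving on. These are standard Comware display commands, shown here as a quick sanity check rather than an exhaustive list:

> display ip interface brief
> display ip routing-table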

Then configure the switch name, enable SSH, set passwords, and you can start managing the switch over SSH:

> sysname switchname

> public-key local create rsa
> ssh server enable
> user-interface vty 0 15
> authentication-mode scheme
> protocol inbound ssh

> super password simple yourpassword
> local-user admin
> password simple yourpassword
> authorization-attribute user-role level-0
> service-type ssh

The “admin” user will have an unprivileged role. Once logged in, you will need to run the following command and enter the super password to elevate to network admin rights:

> super
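
To confirm that remote access is set up the way you expect, a couple of display commands are useful (assuming Comware 7 syntax; the exact output varies between releases):

> display ssh server status
> display local-user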

Intelligent Resilient Framework

In small, non-business-critical environments one standalone switch is usually sufficient. In larger environments switches are typically deployed in pairs for redundancy. To simplify management and to avoid network loops, most switches support some sort of MLAG or stacking; IRF is HPE’s version of it.

Determine which ports you’re going to use for IRF. There are two QSFP+ ports on the 5700 series dedicated for it. Then, on the first switch (the master), run the following commands (it’s recommended to shut down the ports before you set them up as IRF ports):

> irf member 1 priority 32
> int range FortyGigE 1/0/41 to FortyGigE 1/0/42
> shutdown
> irf-port 1/1
> port group interface FortyGigE 1/0/41
> irf-port 1/2
> port group interface FortyGigE 1/0/42
> int range FortyGigE 1/0/41 to FortyGigE 1/0/42
> undo shut
> save
> irf-port-configuration active

On the second switch (slave) run the following commands to change the IRF ID to 2:

> irf member 1 renumber 2
> reboot

When the switch comes up, configure IRF ports:

> irf member 2 priority 30
> int range FortyGigE 2/0/41 to FortyGigE 2/0/42
> shutdown
> irf-port 2/1
> port group interface FortyGigE 2/0/41
> irf-port 2/2
> port group interface FortyGigE 2/0/42
> int range FortyGigE 2/0/41 to FortyGigE 2/0/42
> undo shut
> save
> irf-port-configuration active

Now you can connect the physical IRF ports. IRF uses a ring topology, which means that (in my case) port 1/0/41 should connect to 2/0/42 and port 1/0/42 should connect to 2/0/41.

The second switch will automatically reboot, and if everything is configured correctly, you should see both switches join the IRF fabric. Member switch 1 has the higher priority of 32 and becomes the master:

> display irf
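
Besides display irf, a couple of related commands help to verify the IRF port bindings and the ring topology (Comware 7 syntax):

> display irf configuration
> display irf topology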

Firmware Upgrade

A firmware upgrade is the next logical step after you set up IRF. The latest firmware revision for the switches can be downloaded from the HPE website. Keep in mind you will need an HPE Passport account with a valid service agreement (SAID) added to it.

You will also need a TFTP server to upgrade the firmware. There are a few of them out there, but the most commonly used is probably Tftpd64.

Once the TFTP server is up and running and you’ve copied the firmware file to it, perform the upgrade:

> tftp 10.10.10.20 get 5700-CMW710-R2432P03.ipe
> boot-loader file flash:/5700-CMW710-R2432P03.ipe slot 1 main
> boot-loader file flash:/5700-CMW710-R2432P03.ipe slot 2 main
> irf auto-update enable
> reboot

Confirm firmware has been updated:

> display version
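
You can also check which image file each IRF member will boot from next. On Comware 7 the following command lists the current, main and backup boot images per slot:

> display boot-loader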

VLANs, Aggregation Groups and Tagging

In Comware the term “aggregation group” is used to describe what is called a “port channel” in the Cisco world. Trunk/access ports are also called tagged/untagged ports throughout the documentation.

In this section we will discuss a few common port configuration scenarios:

  • Untagged ports, which can be your iSCSI storage array ports
  • Tagged ports, such as your VMware host uplinks
  • Aggregation groups, typically used for LAGs to upstream switches

First of all create all VLANs and give them descriptions:

> vlan 10
> description iSCSI
> vlan 20
> description Server
> vlan 30
> description Dev and test

Then specify untagged ports:

> vlan 10
> port te 1/0/1
> port te 2/0/1

To configure tagged ports and allow certain VLANs (ports will be added to the VLANs automatically):

> int te 1/0/2
> description ESX01 vmnic0
> port link-type trunk
> port trunk permit vlan 20 30
> int te 2/0/2
> description ESX02 vmnic0
> port link-type trunk
> port trunk permit vlan 20 30
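
Before moving on, it’s worth confirming the VLAN membership and the trunk settings. Something like the following works on Comware (a quick check using the VLANs from above):

> display vlan 20
> display port trunk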

To create an LACP aggregation group:

> interface bridge-aggregation 1
> description Trunk to upstream switch
> link-aggregation mode dynamic
> port link-type trunk
> port trunk permit vlan 20 30

> interface te 1/0/3
> port link-aggregation group 1
> interface te 2/0/3
> port link-aggregation group 1
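
Once the physical links to the upstream switch are cabled and LACP has negotiated, you can check the state of the aggregation group. Ports shown as “Selected” are the ones actively carrying traffic (Comware 7 syntax):

> display link-aggregation summary
> display link-aggregation verbose Bridge-Aggregation 1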

Conclusion

Whether you are a network engineer new to the Comware operating system or a VMware administrator looking for a quick cheat sheet for FlexNetwork/FlexFabric switches, I hope this guide has helped you get the job done.

If this blog post gets the same amount of popularity, maybe it will turn into another series. But for now – over and out.

Dell Force10 Part 1: Initial Configuration

July 3, 2016

When it comes to networking, Dell has two main series of switches: PowerConnect/N-Series, which run the DNOS 6.x operating system, and S/Z-Series switches, which run DNOS 9.x, derived from the Force10 Operating System (FTOS). In this series of blog posts we will go through the configuration of the Force10 switch series, using the Dell S4048-ON top-of-rack switch as an example.

It is interesting to note that, unlike other S-Series switches, the S4048-ON is an Open Networking switch. Dell is one of the first companies that, apart from its own OS, lets customers run other operating systems on its network switches, such as Cumulus Linux and Big Switch Networks Switch Light OS. While Cumulus and Big Switch have their own use cases, in this blog we will look specifically at configuring FTOS.

Boot process

The S4048-ON comes from the factory pre-configured for bare metal provisioning (BMP). This is what you will see when you boot the switch for the first time:

[screenshot: s4048_bmp]

If you just want to boot FTOS, simply skip BMP by choosing A, and the switch will boot the OS.

After some time BMP will time out. If you’ve missed the above wizard, you can also disable BMP from the CLI using the following commands:

> enable
# stop bmp
# config
# reload-type normal-reload
# exit
# reload

When prompted, choose to save the configuration and proceed with the reload. After the switch has rebooted, check that the next boot is set to normal reload:

# show reload-type

Initial configuration

The first steps of any switch installation are assigning a hostname and configuring the management interface settings:

# hostname DELL4048-SWITCH
# int managementethernet 1/1
# ip address 172.10.10.2/24
# no shut
# management route 0.0.0.0/0 172.10.10.10

Then set admin / enable passwords and allow remote management via SSH:

# enable password 123456
# username admin password 123456
# ip ssh server enable

Configure time zone and NTP:

# clock timezone UTC 11
# ntp server 172.10.10.20
# show ntp associations
# show ntp status
# show clock

Firmware upgrade

Force10 switches have two boot banks, A: and B:. It’s a good practice to upload new firmware into one boot bank and keep the old firmware in the other in case you need to roll back.

The easiest way to upgrade is via TFTP using Tftpd64, which you can download for free. If you’re upgrading an existing switch, make sure to save the running config and make a backup. If it’s an initial install you can skip this step.

# copy run start
# copy start tftp://10.0.0.1/FORCE10_SWITCH_01.01.16.conf

Then upload new firmware to image B:, change active boot bank to B: and reload:

# show version
# show boot system stack-unit 1
# upgrade system tftp://10.0.0.1/FTOS-SK-9.9.0.0P9.bin b:
# conf t
# boot system stack-unit 1 primary system b:
# exit
# reload

You will be prompted to save the configuration and reboot. After the reboot you may be asked to enable SupportAssist. SupportAssist helps to automatically open Dell service tickets if there is a switch fault. You can enable SupportAssist by running the following commands and answering the prompts:

# conf t
# support-assist activate
# support-assist activity full-transfer start now
# show support-assist status

My pair of switches was configured in a Virtual Link Trunking (VLT) domain. I’ll explain how VLT works later in the series, but from the upgrade point of view each switch in a VLT domain is treated as a separate switch and has to be upgraded separately. If you decide to use a stack instead of VLT, you can find the upgrade process for a Force10 stack in my other post about Dell MXL switches.

Spanning tree

Spanning Tree Protocol (STP) helps to prevent network topology loops and is highly recommended for use in any network. Switches connected in an actual loop topology are rare in today’s networks, but STP can save you from the consequences of a potential human error, such as a port channel misconfiguration. If, instead of creating one port channel with two links, you mistakenly create two port channels with one link each, and both carry the same VLANs, you’ve accidentally created a loop that will bring your whole network to an immediate halt.

It’s a good practice to enable STP as a safeguard against such configuration errors. The S4048-ON supports STP, RSTP, MSTP and PVST+. In my case the S4048s were uplinked into an HP core, which supported STP, RSTP and MSTP. If you have Cisco switches in your network core you can use PVST+. I used RSTP, which is a good choice if you don’t require the enhancements of MSTP or PVST+ in your network. Just make sure not to use basic STP, as it provides the slowest convergence.

# protocol spanning-tree rstp
# no disable
# show spanning-tree rstp

In every STP topology there is also a root switch, which by default is selected automatically. For more deterministic STP behaviour it’s recommended to select the root switch manually by assigning the lowest STP priority to it. Typically your core switch should be your root switch. In my case it was an HP core switch, which was assigned a priority of 0.
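
For reference, if one of the Force10 switches had to be the root instead, lowering its bridge priority would look roughly like this (a hedged sketch of the FTOS RSTP syntax; priorities are set in increments of 4096, with 0 being the highest priority):

# conf t
# protocol spanning-tree rstp
# bridge-priority 4096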

When configuring server and storage facing ports make sure to enable EdgePort mode to minimize the time it takes for the port to come online:

# int range Te1/45-1/48
# spanning-tree rstp edge-port
# switchport
# no shut

If you want to know more about how STP works, you can read a few of my previous blog posts on STP here and here.

Flow control

To avoid dropped packets on 10Gb switch ports at times of heavy utilization, it is also a best practice to, at a minimum, enable bi-directional Flow Control on the storage array ports. I enabled it on the iSCSI links connected to the Dell Compellent storage array:

# int range Te1/17-1/18
# flowcontrol rx on tx on

If you are specifically interested in switch best practices for Compellent and EqualLogic storage arrays, Dell has a full list of guides for various switches on its community wiki.

Port channels and VLANs

Port channels and VLANs are configured much as on any other switch, but I include them here in case you want to see the syntax. In this example we have two access ports, 1/46 and 1/47, and an uplink to the core configured as port channel 1:

# interface port-channel 1
# switchport
# no shutdown

# interface range Te1/1-1/2
# port-channel-protocol LACP
# port-channel 1 mode active
# no shutdown

# int vlan 254
# untagged Te1/46-1/47
# tagged po 1
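
To verify the result, the usual show commands apply (a quick check using the VLAN and port channel numbers from the example above):

# show vlan id 254
# show int po1 brief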

Keep in mind that port channels are used either in single-switch configurations or when two or more switches are stacked together. If you’re using Virtual Link Trunking (VLT), you will need to create Virtual Link Trunks (VLTs), which are similar to port channels but have a slightly different syntax. We will talk about VLT in much more detail in the following Force10 posts.

Conclusion

One feature which I didn’t specifically mention in this blog post is Jumbo Frames. I tend not to use them in my deployments until I see convincing evidence of them making a difference for iSCSI/NFS storage implementations. I wrote a post about Jumbo Frames a long time ago and haven’t changed my opinion since. I’d be interested to hear your thoughts if you have a different take on that.

vDS Health Check: Useful, but Overlooked

June 11, 2016

As of June 30, 2016, the vSphere Enterprise licence will no longer be available. As more and more customers move to the Enterprise Plus licensing scheme, we will see wider adoption of Enterprise Plus features, such as vSphere Distributed Switch, SIOC, NIOC and Storage DRS.

Therefore, there will be continuing demand for better coverage of these features, and I want to start blogging about them more to meet it. The first post will be about one of the hidden gems – vSphere Distributed Switch Health Check.

Feature overview

The reason I picked Health Check specifically is that it’s very helpful when troubleshooting connectivity issues on vSphere Distributed Switch uplinks. At the same time it’s lesser known, because it’s buried deep in the vDS settings section, is available only from the Web Client and is disabled by default.

vDS health check is capable of doing the following tests:

  • VLAN and MTU
  • Teaming and failover

By sending broadcasts from one link and receiving them on another, vDS health check can determine if a VLAN is not allowed on a trunk or if there is an MTU mismatch. In the same way, if you’re using LACP, vDS will alert you to any port channel misconfigurations.

Usage example

Before you can start using vDS health check you need to enable it in vSphere Web Client > Networking > dvSwitch > Manage > Settings > Health check. Click on the Edit button and enable both tests.

[screenshot: enable_healthcheck]

Now if you go to the Monitor tab and click on the Health section, after a few minutes of initial checks you will see a per host breakdown of identified issues.

[screenshot: healthcheck_results]

In my case I was able to immediately determine that VLAN 120 was not trunked on the physical switch. The port group this VLAN ID was assigned to had no VMs at the time, so the problem was fixed proactively, before it could cause any impact.

[screenshot: vlan_mismatch]

Possible use cases

The above example is a very straightforward one. The VLAN was not added to the trunk on any of the uplink ports of the physical switch, and the issue would have been spotted right after the first VM was added to the port group.

But what if the VLAN was missing on only one of the host’s uplinks? A VM would be running fine on another host, and after a vMotion (during maintenance work on that host, for example) it could get migrated to the affected host and lose connectivity. The result – impact to production workloads and time wasted on troubleshooting.

MTU checks are particularly helpful in environments where a non-standard MTU size is used, such as 9000-byte jumbo frames for iSCSI. It’s important for the MTU to match on both the vDS and the physical switch, and this check confirms exactly that.

And last but not least, the teaming and failover tests can be useful when you’re using the LACP capability of vDS and one of the uplinks is not added to the port channel configuration, which can also cause some nasty issues.

Conclusion

In my opinion vSphere Distributed Switch Health Check is one of those valuable but overlooked features. I suggest giving it a go if you haven’t already done so. It will notify you of any newly introduced network issues, or, who knows, maybe it will even find a mismatch in your current vDS configuration.

Beginner’s Guide to Dell N4000 Series Switches

January 18, 2016

Dell N-Series switches run on Dell Network Operating System (DNOS) version 6.x. Unlike Dell S-Series switches, which run on DNOS 9.x derived from the Force10 Operating System (FTOS), DNOS 6.x came from the PowerConnect switch series and shares the same codebase. So if you’ve ever worked with PowerConnect switches, the N-Series syntax should be very familiar.

In my case I had two Dell N4032F switches. But the same set of commands applies to any other N4000 Series switch.

Initial Configuration

When you first turn the switch on, it gives you 60 seconds to enter the wizard, where you can set up network settings for the Out-of-Band (OOB) management interface and change the admin password. If you miss it, you can reboot the switch and it will show the same wizard prompt again when it boots up, or you can set it up from the CLI:

# interface out-of-band
# ip address 10.10.10.10 255.255.255.0 10.10.10.254

# show ip interface out-of-band

Once you get to the CLI prompt, configure hostname and enable SSH:

# hostname n4032f-prod

# crypto key generate rsa
# crypto key generate dsa
# ip ssh server
# ip telnet server disable

Stacking

Dell N4000 Series switches support both stacking and MLAG (Multi-chassis Link Aggregation). One of the drawbacks of the stack configuration is disruptive firmware upgrades. When you update firmware on the stack master, firmware is distributed to all stack members and all switches are rebooted simultaneously.

In MLAG, each switch has its own control plane and can be rebooted independently. This is also MLAG’s shortcoming, because unlike a stack, where all units act as one switch, in MLAG you have to manage each switch separately.

In my case I chose stacking for its simplicity.

[image: Dell N4000]

N4000 switches are stacked using the two 40Gb QSFP ports located at the front. The QSFP ports are not configured in stack mode by default, which you need to change on both switches before you can build a stack:

# stack
# stack-port Fortygigabitethernet 1/1/1 stack
# stack-port Fortygigabitethernet 1/1/2 stack

# show switch stack-ports

Once the QSFP ports on both switches are configured, disconnect power from both switches and boot the switch you want to be the stack master first (typically the top switch). When the first switch has fully booted, boot the second switch and check the status. This is what you should see:

# show switch

[screenshot: n4000_stack]

Firmware Upgrade

If it’s not a brand new switch, save the config before doing the firmware upgrade:

# copy run start
# copy running-config tftp://10.10.10.100/backup.txt

You can use any TFTP server for the firmware upgrade, such as the free Tftpd64 server.

[screenshot: tftpd64]

Then you upload the firmware image to the stack master and reload the stack:

# copy tftp://10.10.10.100/N4000v6.2.7.2.stk backup
# boot system backup
# reload
# show version

The firmware is uploaded to the backup image. Then you select the backup image for the next boot and reload the stack. When both switches have rebooted, you should see something similar to this:

[screenshot: firmware_upgraded]

As part of the upgrade process the new firmware is automatically copied from the master to all stack members, which is the default behaviour. You can confirm it is enabled using the following command:

# show auto-copy-sw

Flow Control, Jumbo Frames and iSCSI Optimization

In my case I used two N4032F switches for an iSCSI backbone, so I needed to make sure that Flow Control and Jumbo Frames are enabled on the switch.

Flow Control is enabled by default, which you can confirm by the following command:

# show storm-control

To globally enable Jumbo Frames on all ports type:

# system jumbo mtu 9216

# show system mtu

Interestingly, Dell N4000 Series switches also have built-in iSCSI optimization, which can detect iSCSI sessions by snooping the traffic on ports 3260 and 860. It then prioritizes iSCSI traffic over the other types of traffic to guarantee low latency for storage I/O. To show iSCSI settings:

# show iscsi

By default, the switches only track the sessions; traffic prioritization is disabled and has to be enabled manually. This didn’t matter in my case, as the switches were dedicated to storage traffic, but if you share switches between storage and server traffic, you may want to enable it. Refer to the switch User’s Configuration Guide for details.

If you’re using a Dell Compellent storage array with N4000 switches, also make sure to apply a Compellent profile to the ports the storage array is connected to:

# macro global apply profile-compellent-nas $interface_name te1/0/1
# macro global apply profile-compellent-nas $interface_name te1/0/2
# macro global apply profile-compellent-nas $interface_name te1/0/3
# macro global apply profile-compellent-nas $interface_name te1/0/4

VLANs, Trunks and Port Channels

Again, I didn’t use any VLANs or trunks, because the switches were dedicated to iSCSI traffic and were separate from the LAN core. I didn’t need port channels either, as they are not required for iSCSI.

Your scenario might be different. For instance, if you have vSphere hosts connected to a NetApp array over NFS, you may want to create a Multi-Mode (LACP) VIF on the NetApp side. If that’s the case, create a port channel on the Multi-Mode VIF ports as follows:

# interface range te1/0/2,te2/0/2
# channel-group 1 mode active
# show interfaces po1

If the switches are used for both storage and VM traffic, then you’ll need to configure the server ports and uplink them to your network core. Create your VLANs first:

# vlan 10,20,30

Configure vSwitch uplinks from the ESXi hosts. In a typical vSphere environment, traffic is tagged on the vSwitch side, which means that server ports should be configured as trunks:

# interface range te1/0/3-6,te2/0/3-6
# switchport mode trunk
# switchport trunk allowed vlan 10,20,30
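
To confirm that the trunks are set up correctly, you can check the switchport settings on one of the server-facing ports (standard DNOS 6.x show commands, shown as a quick sanity check):

# show interfaces switchport te1/0/3
# show vlan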

And finally configure uplinks to the network core. Depending on how your LAN core is set up, you may want to create a port channel to the upstream switch and trunk the required VLANs:

# interface range te1/0/1,te2/0/1
# channel-group 2 mode active
# switchport mode trunk
# switchport trunk allowed vlan 10,20,30
# show interfaces po2

Conclusion

This guide didn’t include information on Spanning Tree, QoS or any of the switch’s Layer 3 features, but I hope it gets you started. At the end of the day, every environment is different. If you need additional information, refer to the switch configuration guides on the Dell website.

Traffic Load Balancing in Cisco UCS

December 21, 2015

Whenever I deploy Cisco UCS at a customer site, a question I get asked a lot is how traffic flows within the system: between the VMs running on the blades and the FEX modules, between the FEX modules and the Fabric Interconnects, and finally how it’s uplinked to the network core.

Cisco has a range of CNA cards for UCS blades. With the VIC 1280 you get 8 x 10Gb ports split between two FEX modules for redundancy. FEX modules themselves can have up to 8 x 10Gb Fabric Interconnect-facing interfaces, which can give you up to 160Gb of bandwidth per chassis. All these numbers may sound impressive, but unless you understand how your VM traffic flows through UCS, it’s easy to make wrong assumptions about the per-VM and aggregate bandwidth you can achieve. So let’s dive into UCS and shed some light on how VM traffic is load-balanced within the system.

UCS Hardware Components

Each Fabric Extender (FEX) has external and internal ports. External FEX ports are patched to the FIs, and internal ports are wired internally to the blade adapters. The FEX 2204 has 4 external and 16 internal ports, and the FEX 2208 has 8 external and 32 internal ports.

External ports are connected to the FIs in powers of two: 1, 2, 4 or 8 ports per FEX, and they form a port channel (make sure to use the “Port Channel” link grouping preference under Chassis/FEX Discovery Policy). The same rule applies to the blade Virtual Interface Cards (VICs). The most common VICs, the 1240 and 1280, have 4 x 10Gb and 8 x 10Gb ports respectively and also form a port channel to the internal FEX ports. Every VIC adaptor is connected to both FEX modules for redundancy.

[diagram: chassis_network]

Fabric Interconnects are then patched to your network core and FC fabric (if you have one). Whether the Ethernet uplinks are individual links or port channels will depend on your network topology. For fibre uplinks the rule of thumb is to patch FI A to your FC Fabric A and FI B to FC Fabric B, which follows the common FC traffic isolation principle.

Virtual Circuits

To provide network and storage connectivity to blades you create virtual NICs and virtual HBAs on each blade. Since internally UCS uses FCoE to transfer FC frames, both vNICs and vHBAs use the same 10GbE uplinks to send and receive traffic. It’s worth mentioning that Cisco uses the Data Center Bridging (DCB) protocol with its sub-protocols, Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS), which guarantee that FC frames have higher priority in the queue and are processed first to ensure low latency. But I digress.

UCS assigns a virtual circuit to each virtual adaptor, which is a representation of how the traffic traverses the system all the way from the VIC port to a FEX internal port, then a FEX external port, an FI server port and finally an FI uplink. You can trace the full path of each virtual adaptor in UCS Manager by selecting a Service Profile and viewing the VIF Paths tab.

[screenshot: vif_paths]

In this example we have a blade with four vNICs and two vHBAs, which are split between the two fabrics. All virtual adaptors on fabric A are connected through VIC port channel PC-1283, which is represented as port channel PC-1025 on the FEX A side. Traffic then leaves FEX A and reaches Fabric Interconnect A, which sends it out to the network core through port channel A/PC-1.

You can also get the list of port channels from the FI CLI:

# connect nxos
# show port-channel summary

[screenshot: ucs_portchannels]

Network Load Balancing

Now that we know how all components are interconnected to each other, let’s discuss the traffic flow in a typical VMware environment and how we achieve the massive network throughput that UCS provides.

As an example, let’s take a look at the vSwitch where your VM Network port group is configured. The vSwitch will have two uplinks – one goes to Fabric A and the other to Fabric B for redundancy. The default load balancing policy on a vSwitch is “Route based on the originating virtual port ID”, which essentially pins all traffic for a VM to a particular uplink. vSphere makes sure that VMs are evenly distributed between the uplinks to use all the network bandwidth available.

From each uplink (or vNIC in UCS terms), traffic is forwarded through an adapter port channel to a FEX, then to a Fabric Interconnect, and leaves UCS from an FI uplink. Within UCS, traffic is distributed between port channel members using a source/destination IP hash algorithm, which is even more granular and is capable of very efficient traffic distribution across all members of a port channel all the way up to your network core.

[diagram: ucs_loadbalancing]
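
Since the Fabric Interconnects run NX-OS underneath, you can also check which load-balancing hash is in use for the port channels from the same FI CLI session shown earlier (a hedged example; the output format differs between UCSM releases):

# connect nxos
# show port-channel load-balance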

If you look at the vSwitch, you’ll see that with UCS each uplink shows the maximum bandwidth available from the vNIC and is not limited to a port channel member speed of 10Gb. Why is this so powerful? Because with UCS you don’t need to slice the adapter’s available bandwidth between different types of traffic. Even though you provision multiple vNICs and vHBAs for the vSphere hosts, UCS uses the same port channel links (20Gb in the example below) from the VIC adapter to transfer all traffic and takes care of load balancing for you.

[screenshot: vswitch_uplinks]

You may legitimately ask: if UCS uses the same pipe to transfer all data regardless of which vSwitch uplink is being used, how can I make sure that different types of traffic, such as vMotion, storage, VM traffic and replication, do not compete for the same pipe? First, you need to ask yourself whether you can saturate that much bandwidth with your workloads. If the answer is yes, you can use another great UCS feature: QoS. QoS lets you assign a minimum bandwidth guarantee on a per-vNIC/vHBA basis. But that’s a topic for another blog post.

References

In this post I tried to summarise the logic behind UCS traffic distribution. If you want to dig deeper into UCS network architecture, there are a lot of great bloggers out there who cover it in much more detail.

Force10 MXL: Initial Configuration

March 14, 2015

Continuing the series of posts on how to deal with Force10 MXL switches, this one is about VLANs, port channels, tagging and all the basic stuff. It’s not much different from other vendors like Cisco or HP – at the end of the day it’s the same networking standards.

If you want to match the terminology with Cisco, for instance, what you are used to as EtherChannels are called Port Channels on Force10, and Cisco’s trunk/access ports are called tagged/untagged ports on Force10.

Configure Port Channels

If you are after dynamic LACP port channels (as opposed to static), they are configured in two steps. The first step is to create the port channel itself:

# conf t
# interface port-channel 1
# switchport
# no shutdown

Then you enable LACP on the interfaces you want to add to the port channel. I have a four-switch stack and use the 0/.., 1/.. type of syntax:

# conf t
# int range te0/51-52 , te1/51-52 , te2/51-52 , te3/51-52
# port-channel-protocol lacp
# port-channel 1 mode active

To check whether the port channel has come up, use the command below. The port channel obviously won’t initialize if it’s not set up on the other side as well.

# show int po1 brief

[screenshot: port_channel]

Configure VLANs

Then you create your VLANs and add ports to them. Typically, if you have vSphere hosts connected to the switch, you tag traffic at the ESXi host level, so both your host ports and the port channel will need to be added to the VLANs as tagged. For any standalone non-virtualized servers you’ll use untagged ports.

# conf t
# interface vlan 120
# description Management
# tagged Te0/1-4
# tagged Te2/1-4
# tagged Po1
# no shutdown
# copy run start

I have four hosts. Each host has a dual-port NIC which connects to two fabrics – switches 0 and 2 in the stack (1 port per fabric). I allow VLAN 120 traffic from these ports through the port channel to the upstream core switch.

You’ll most likely have more than one VLAN – at least one for Management and one for Production if it’s vSphere – but the process for the rest is exactly the same.

The other switch

Just to give you the whole picture, I’ll include the configuration of the switch on the other side of the trunk. I had a modular HP switch with 10Gb modules; the config for it would look like the following:

# conf t
# trunk I1-I8 trk1 lacp
# vlan 120 tagged trk1
# write mem

I1 to I8 here are ports, where I is the module and 1 to 8 are the ports within that module.