
Force10 MXL Switch: Stacking

March 3, 2015

Overview

There are two typical scenarios for stacking MXLs: within a chassis and across chassis. In both cases a ring topology is recommended. Daisy chaining is also supported, but not desirable because it lacks redundancy.

In this post I will describe the more common case, which is intra-chassis stacking. For inter-chassis stacking configuration you can refer to the Dell or Force10 documentation.

Cabling

[Image: Dell blade chassis interconnect bays]

In my case I have four MXL switches in bays A1, B1, B2 and A2. Cabling is simple: you daisy chain all the switches and then connect the last switch back to the first one to close the ring.

Stack roles and unit numbers

When the stack is built, each switch is assigned an ID, starting from 0, and a role in the stack. There are three roles: Master, Standby and Member:

  • Master – the switch you’ll use for all configuration. If you currently have IP addresses assigned to all your MXL switches, all of them except one will be reset and only the Master will be accessible via SSH.
  • Standby – the switch that takes over if the Master fails. The Master’s IP address is transferred to the Standby in a failover scenario and the stack continues to be managed via the same IP.
  • Member – provides port capacity and doesn’t play any additional role in the stack.

When you plug the cables in, assign the stack ports and restart the switches, they go through an election process and automatically pick up roles as well as IDs. The switches follow an algorithm that assigns stack IDs and roles, but it has nothing to do with the interconnect bay IDs in the chassis or the order in which you cable the switches, so you end up with pretty much random numbering.

If the order matters, you’ll have to reboot the switches one by one in a particular order to have the desired IDs assigned; that way the IDs are handed out sequentially in a controlled fashion, as sketched below.
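FTOS also has commands for controlling unit numbering and Master election more directly, so you don’t have to rely on the reboot order alone. Below is a minimal sketch assuming the stack-unit renumber EXEC command and the stack-unit priority configuration command that the FTOS documentation describes; double-check the exact syntax and behaviour in the configuration guide for your firmware version:

# stack-unit 0 renumber 2     <- give unit 0 the new ID 2
# stack-unit 2 priority 14    <- the unit with the highest priority should win the Master election
# copy run start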

Stack configuration

If you don’t have any additional 40GbE modules in slots 0 and 1, you’ll end up with two QSFP+ ports on the built-in module – ports 33 and 37 (refer to my Force10 MXL Switch: Port Numbering post for port numbering details). All you need to do is designate them as stack ports on all switches in configuration mode, save the config and reboot.

# stack-unit 0 stack-group 0    <- built-in 40GbE port 33
# stack-unit 0 stack-group 1    <- built-in 40GbE port 37
# copy run start
# reload

By default each switch is unit 0 in its own stack, and a stack-group is basically just a 40GbE stack port. You can have a maximum of six such ports, numbered from 0 to 5. To check that the stack ports have been enabled, run:

# do show system stack-unit 0 stack-group configured

[Image: output showing the configured stack groups]

It could be that your 40GbE ports are in quad 10GbE mode and are not shown. In that case you’ll need to convert them back to 40GbE mode to proceed (see the sketch after the port list below). To show the list of available stack ports, type in the command below. Note that the switch shows empty expansion slots as stack ports as well (ports 0/41 and 0/45), which is a bit confusing.

# show system stack-unit 0 stack-group

[Image: output listing the available stack ports]
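If the built-in ports do turn out to be in quad 10GbE mode, the conversion back to 40GbE is, as far as I remember, done by removing the portmode quad setting in configuration mode and reloading. A rough sketch for ports 33 and 37 is below; verify the syntax against the configuration guide for your firmware first:

# no stack-unit 0 port 33 portmode quad    <- revert port 33 to native 40GbE
# no stack-unit 0 port 37 portmode quad    <- revert port 37 to native 40GbE
# copy run start
# reload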

After a reboot, the switches will join the stack and get a role and an ID. This process is automatic by default. To see whether the stack ports have come up after a reboot, type:

# show system stack-port status

[Image: stack port status output]

Conclusion

In my example I let the switches go through the election process and select roles and IDs on their own. If you want to control the assignment process, refer to the Dell and Force10 documentation for instructions.

Now you may wonder: if unit IDs are assigned automatically, how do you know which stack unit corresponds to which chassis bay? The hint is to pull the system inventory and map the units by their Service Tag, which is also shown in the Chassis Management Controller:

# show system brief
# show inventory


Force10 MXL Switch: Port Numbering

February 26, 2015

This is a quick cheat sheet for the MXL port numbering schema, which might seem a bit confusing if you’re seeing an MXL switch for the first time.

[Image: Force10 MXL 10/40GbE switches]

Above is a picture of the switches I’ve worked with. On the right there is a 2-Port 40GbE built-in module, and then there are two expansion slots – slot 0 in the middle and slot 1 on the left. Each module has 8 ports allocated to it. The reason is that you can have a 2-Port 40GbE QSFP+ module in each of the slots, which can operate in 8x10GbE mode. You will need QSFP+ to 4xSFP+ breakout cables for that, but it’s not the most common scenario anyway.

As we have 8 ports per slot, it would look something like this:

[Image: MXL external port mappings]

This picture is more about switch stacking, but the rightmost section should give you a basic idea. One of the typical MXL configurations is a built-in 40GbE module used for stacking and one or two 4-Port SFP+ expansion modules in slots 0 and 1. In that case your port numbers will be: 33 and 37 for the 40GbE ports, 41 to 44 in expansion slot 0 and 49 to 52 in expansion slot 1.

[Image: hybrid QSFP+ and 4-Port SFP+ module]

As you can see, for a QSFP+ module the switch breaks the 8 ports into two sets of 4 and picks the first number in each set for the 40GbE ports. For SFP+ modules it uses consecutive numbers within each slot and then leaves a 4-port gap.
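To put these numbers into context, this is roughly how they translate into interface names on the CLI. I’m assuming FTOS’s fortyGigE and TenGigabitEthernet interface keywords and stack unit 0 here, so adjust for your own setup:

# interface fortyGigE 0/33               <- first built-in 40GbE port
# interface TenGigabitEthernet 0/41      <- first SFP+ port in expansion slot 0
# interface TenGigabitEthernet 0/49      <- first SFP+ port in expansion slot 1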

Port numbering is described in more detail in the MXL switch configuration guide, which you can use as a reference. But this short note might help someone quickly knock that off instead of browsing through a 1000-page document.

Also, I’ve seen pictures of MXL switches with a slightly different port numbering: 41 to 48 in slot 0 and 33 to 40 in slot 1, which looks like a mirrored version of the switch with the built-in module on the opposite side. I’m not sure if it’s just an older version of the same switch, but keep in mind that you might have the other variation of the MXL in your blade chassis.

Jumbo Frames justified?

March 27, 2012

When it comes to VMware on NetApp, boosting performance by implementing Jumbo Frames is always taken into consideration. However, it’s not clear whether it really has any significant impact on latency and throughput.

Officially, VMware doesn’t support Jumbo Frames for NAS and iSCSI. It means that using Jumbo Frames to carry storage traffic from the VMkernel interface to your storage system is a solution that is not tested by VMware; however, it actually works. To use Jumbo Frames you need to enable them along the whole communication path: guest OS, virtual NIC (change from E1000 to Enhanced vmxnet), virtual switch and VMkernel, physical Ethernet switch and storage. It’s a lot of work and it’s disruptive at some points, which is not a good idea for a production infrastructure. So I decided to look at benchmarks before deciding to spend a great amount of time and effort on it.
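For reference, this is roughly what the vSphere and NetApp part of that path looks like from the command line. The vSwitch name, port group name, interface name and addresses below are just placeholders, and the physical switch configuration depends on the vendor, so treat this as a sketch rather than a recipe:

# esxcfg-vswitch -m 9000 vSwitch1
# esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 IP_Storage

The first command raises the vSwitch MTU to 9000, the second adds a jumbo-enabled VMkernel NIC on the IP_Storage port group (to change the MTU of an existing VMkernel NIC you typically remove and re-add it). On a 7-mode NetApp controller the interface MTU can be set with ifconfig, e0a being just an example interface:

# ifconfig e0a mtusize 9000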

VMware and NetApp have a technical report, TR-3808-0110, called “VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS”. Section 2.2 clearly states that:

  • Using NFS with jumbo frames enabled using both Gigabit and 10GbE generated overall performance that was comparable to that observed using NFS without jumbo frames and required approximately 6% to 20% fewer ESX CPU resources compared to using NFS without jumbo frames, depending on the test configuration.
  • Using iSCSI with jumbo frames enabled using both Gigabit and 10GbE generated overall performance that was comparable to or slightly lower than that observed using iSCSI without jumbo frames and required approximately 12% to 20% fewer ESX CPU resources compared to using iSCSI without jumbo frames, depending on the test configuration.

Another important statement here is:

  • Due to the smaller request sizes used in the workloads, it was not expected that enabling jumbo frames would improve overall performance.

I believe that 4K and 8K request sizes are fair for a virtual infrastructure. Maybe if you move large amounts of data through your virtual machines it will make sense for you, but I feel it’s not reasonable to implement Jumbo Frames for a virtual infrastructure in general.

The other finding of the report is that Jumbo Frames decrease CPU load, but if you use TOE NICs then, once again, there is little point.

VMware supports jumbo frames with the following NICs: Intel (82546, 82571), Broadcom (5708, 5706, 5709), NetXen (NXB-10GXxR, NXB-10GCX4), and Neterion (Xframe, Xframe II, Xframe E). We use Broadcom NetXtreme II BCM5708 and Intel 82571EB, so Jumbo Frames implementation is not going to be a problem. Maybe I’ll try to test it myself when I have some free time.
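If I do, a quick end-to-end sanity check could be a jumbo-sized ping from the ESX console to the storage interface: 8972 bytes of ICMP payload plus headers fits exactly into a 9000-byte frame, so with fragmentation disallowed a failed ping points at an MTU mismatch somewhere along the path. The address below is a placeholder, and I’d add the don’t-fragment option on vmkping builds that support it, otherwise the packet may simply get fragmented and the test proves little:

# vmkping -s 8972 192.168.1.20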

Links I found useful: