Posts Tagged ‘trunk’

Beginner’s Guide to HPE 5000 Series Switches

October 14, 2017

I don't closely track the popularity of my blog. If what I share helps people in their day-to-day jobs, that's already good enough for me. But I do look at site statistics now and then just out of curiosity, and it seems that network-related posts are quite popular. A blog post I wrote a while ago on Dell N4000 switches has quickly made it into the top five over the last year.

So it seems there is demand for entry-level switch configuration guides. I've worked with quite a few different switch brands over the years, so I thought I would build on the success of the Dell blog post and this time write about the HPE FlexNetwork/FlexFabric 5000 switch series.

Operating Systems

HPE has several network switch product lines, and I won't even try to cover all of them in this post. But it's important to know that there are a few different operating systems you can encounter while working with HPE network switches. There is the familiar ProCurve product portfolio (now merged with Aruba), which is based on the ProVision operating system.

The HPE FlexNetwork/FlexFabric 5000 series, on the other hand, is based on the Comware operating system. It has a different CLI command set and can be a complete surprise if you've only worked with ProCurve switches before. So this blog post will be particularly valuable for those who are dealing with the HPE 5000 for the first time.

The following guide has been tested on a pair of HPE FlexFabric 5700 series switches. Even though the commands are mostly the same, there might be some minor differences on other switch series, such as the FlexNetwork 5800.

Initial Configuration

When the switch boots for the first time it starts automatic configuration by trying to obtain settings over DHCP, which you can interrupt with Ctrl+C to get straight to the CLI.

You start in user view, where you can run display commands to review the switch settings. To start the configuration, change to system view:

> system-view

Let's start by configuring remote access to the switch. There are two ways to do that. You can either use the out-of-band management port:

> interface M-GigabitEthernet 0/0/0
> ip address 10.10.10.10 255.255.255.0
> ip route-static 0.0.0.0 0.0.0.0 10.10.10.1

Or you can configure a VLAN interface IP address:

> interface vlan-interface 1
> ip address 10.10.10.10 255.255.255.0
> ip route-static 0.0.0.0 0.0.0.0 10.10.10.1
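Whichever method you use, a quick way to confirm that the address has been applied is to check the interface summary (a standard Comware 7 display command):

> display ip interface brief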

Then configure the switch name, enable SSH, set the passwords, and you can start managing the switch over SSH:

> sysname switchname

> public-key local create rsa
> ssh server enable
> user-interface vty 0 15
> authentication-mode scheme
> protocol inbound ssh

> super password simple yourpassword
> local-user admin
> password simple yourpassword
> authorization-attribute user-role level-0
> service-type ssh

User "admin" will have an unprivileged role. Once logged in, you will need to run the following command and enter the super password to elevate to network admin rights:

> super

Intelligent Resilient Framework

In small non-business-critical environments one standalone switch is usually sufficient. In larger environments switches are typically deployed in pairs for redundancy. To simplify management and avoid network loops, most switches support some sort of MLAG or stacking, and IRF is HPE's version of it.

Determine which ports you're going to use for IRF. On the 5700 series there are two QSFP+ ports that can be dedicated to it. Then, on the first switch (master), run the following commands (it's recommended to shut down the ports before setting them up as IRF ports):

> irf member 1 priority 32
> int range FortyGigE 1/0/41 to FortyGigE 1/0/42
> shutdown
> irf-port 1/1
> port group interface FortyGigE 1/0/41
> irf-port 1/2
> port group interface FortyGigE 1/0/42
> int range FortyGigE 1/0/41 to FortyGigE 1/0/42
> undo shut
> save
> irf-port-configuration active

On the second switch (slave), run the following commands to change the IRF member ID to 2:

> irf member 1 renumber 2
> reboot

When the switch comes up, configure IRF ports:

> irf member 2 priority 30
> int range FortyGigE 2/0/41 to FortyGigE 2/0/42
> shutdown
> irf-port 2/1
> port group interface FortyGigE 2/0/41
> irf-port 2/2
> port group interface FortyGigE 2/0/42
> int range FortyGigE 2/0/41 to FortyGigE 2/0/42
> undo shut
> save
> irf-port-configuration active

Now you can connect the physical IRF ports. IRF is a ring topology, which means (in my case) port 1/0/41 should connect to 2/0/42 and port 1/0/42 should connect to 2/0/41.

The second switch will automatically reboot, and if everything is configured correctly, you should see both switches join the IRF fabric. Member switch 1 has the highest priority of 32 and becomes the master:

> display irf

Firmware Upgrade

A firmware upgrade is the next logical step after you set up IRF. The latest firmware revision for the switches can be downloaded from the HPE website. Keep in mind that you will need an HPE Passport account with a valid service agreement (SAID) added to it.

You will also need a TFTP server to upgrade the firmware. There are a few of them out there, but the most commonly used is probably Tftpd64.

Once the TFTP server is up and running and the firmware file has been copied to it, perform the upgrade:

> tftp 10.10.10.20 get 5700-CMW710-R2432P03.ipe
> boot-loader file flash:/5700-CMW710-R2432P03.ipe slot 1 main
> boot-loader file flash:/5700-CMW710-R2432P03.ipe slot 2 main
> irf auto-update enable
> reboot

Confirm firmware has been updated:

> display version

VLANs, Aggregation Groups and Tagging

In Comware the term "aggregation group" is used for what is called a "port channel" in the Cisco world. Trunk and access ports are also referred to as tagged and untagged ports throughout the documentation.

In this section we will discuss a few common port configuration scenarios:

  • Untagged ports, which can be your iSCSI storage array ports
  • Tagged ports, such as your VMware host uplinks
  • Aggregation groups, typically used for LAGs to upstream switches

First of all, create all the VLANs and give them descriptions:

> vlan 10
> description iSCSI
> vlan 20
> description Server
> vlan 30
> description Dev and test

Then specify untagged ports:

> vlan 10
> port te 1/0/1
> port te 2/0/1

To configure tagged ports and allow certain VLANs (ports will be added to the VLANs automatically):

> int te 1/0/2
> description ESX01 vmnic0
> port link-type trunk
> port trunk permit vlan 20 30
> int te 2/0/2
> description ESX02 vmnic0
> port link-type trunk
> port trunk permit vlan 20 30
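To double-check VLAN membership and which VLANs are permitted on the trunk ports, these display commands should help (standard Comware 7 commands; the output format may vary slightly between releases):

> display vlan brief
> display port trunk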

And to create an LACP aggregation group:

> interface bridge-aggregation 1
> description Trunk to upstream switch
> link-aggregation mode dynamic
> port link-type trunk
> port trunk permit vlan 20 30

> interface te 1/0/3
> port link-aggregation group 1
> interface te 2/0/3
> port link-aggregation group 1
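To verify that LACP negotiation succeeded and both member ports are in Selected state, you can check the aggregation group (assuming the upstream switch has its side of the LAG configured as dynamic as well):

> display link-aggregation verbose Bridge-Aggregation 1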

Common Commands

Here are other useful commands that don't fall under any specific category but are handy to know.

Display switch configuration:

> display current-configuration

Save switch configuration:

> save

Shut down a port:

> int te 1/0/27
> shutdown

Undo a command:

> undo shutdown
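A few more display commands I tend to keep at hand (all standard Comware 7 commands, though the output details vary by model):

> display interface brief
> display mac-address
> display logbuffer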

Conclusion

Whether you are a network engineer new to the Comware operating system or a VMware administrator looking for a quick cheat sheet for FlexNetwork/FlexFabric switches, I hope this guide has helped you get the job done.

If this blog post gets the same amount of popularity, maybe it will turn into another series. But for now – over and out.


Dell Force10 Part 2: VLT Basics

July 10, 2016

Last time I wrote a blog post on the initial configuration of Force10 switches, which you can find here. There I talked about firmware upgrades and basic features, such as STP and Flow Control. In this blog post I would like to touch on a key feature of Force10 switches: Virtual Link Trunking (VLT).

VLT is Force10's implementation of Multi-Chassis Link Aggregation Group (MLAG), which is similar to Virtual Port Channels (vPC) on Cisco Nexus switches. The goal of VLT is to let you establish one aggregated link to two physical network switches in a loop-free topology, which is not possible with two standalone switches.

You could say that switch stacking gives you similar capabilities, and you would be right. The issue with stacked switches, though, is that they act as a single switch not only from the data plane point of view, but also from the control plane point of view. The implication is that if you need to upgrade a switch stack, you have to reboot both switches at the same time, which brings down your network. If you have an iSCSI or NFS storage array connected to the stack, this may cause trouble, especially in enterprise environments.

With VLT you also have one data plane, but individual control planes. As a result, each switch can be managed and upgraded separately without full network downtime.

VLT Terminology

Virtual Link Trunking uses the following set of terms:

  • VLT peer – one of the two switches participating in VLT (you can have a maximum of two switches in a VLT domain)
  • VLT interconnect (VLTi) – interconnect link between the two switches to synchronize the MAC address tables and other VLT-related data
  • VLT backup link – heartbeat link used to send keepalive messages between the two switches; it's also used to determine switch state if the VLTi link fails
  • VLT – this is the name of the feature – Virtual Link Trunking – as well as of a VLT link aggregation group – Virtual Link Trunk. We will call the aggregated link a VLT LAG to avoid ambiguity.
  • VLT domain – grouping of all of the above

VLT Topology

This is what a sample VLT domain looks like. S4048-ON switches have six 40Gb QSFP+ ports, two of which we use for the VLT interconnect. It's recommended to use a static LAG for VLTi.

[Figure: basic VLT topology]

Two 1Gb links are used for VLT backup. You can use switch out-of-band management ports for this. Four 10Gb links form a VLT LAG to the upstream core switch.
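Just to give an idea of what the topology above translates into, here is a minimal sketch of the VLT domain configuration on the first S4048-ON. Port-channel 127 for the VLTi and the peer's management IP 192.168.1.2 are hypothetical values, and the exact keywords may differ between FTOS/DNOS 9.x releases; the actual configuration is covered in the next posts.

# conf t
# vlt domain 1
# peer-link port-channel 127
# back-up destination 192.168.1.2
# primary-priority 1
# end
# show vlt brief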

Use Cases

So where is this actually helpful? The vast majority of today's environments are virtualized and do not require LAGs. vSphere already uses teaming on vSwitch uplinks to distribute traffic across all network ports by default. There are some use cases in VMware environments where you can create a LAG to a vSphere Distributed Switch for faster link failure convergence or improved packet switching. Unless you have a really large vSphere environment this is generally not needed, but you may use this option later on if required. Read Chris Wahl's blog post here for more info.

Where VLT is really helpful is in building a loop-free network topology in your datacenter. See, all your vSphere hosts are connected to both Force10 switches for redundancy. Since traffic comes to either of the switches depending on which uplink is picked on an ESXi host, you have to make sure that VMs on switch 1 are able to communicate with VMs on switch 2. If all you had in your environment were two Force10 switches, you would establish a LAG between the two and be done with it. But if your network topology is a bit larger than that and you have at least one additional core switch/router in your environment, you are faced with the following dilemma: how can you ensure efficient traffic switching in your network without creating loops?

[Figure: STP loop created by a LAG between the two switches]

You can no longer create a LAG between the two Force10 switches, as it will create a loop. Your only option is to keep the switches connected only to the core and not to each other. And by doing that you will cause all traffic from VMs on switch 1 destined to VMs on switch 2, and vice versa, to traverse the core.

[Figure: east-west traffic traversing the core]

And that's where VLT comes into play. All east-west traffic between servers is contained within the VLT domain and doesn't need to traverse the core. As shown above, if we didn't use VLT, traffic from one switch to another would have to go from switch 1 to the core and then back from the core to switch 2. In a VLT domain, traffic between the switches goes directly from switch 1 to switch 2 using VLTi.

Conclusion

That’s a brief introduction to VLT theory. In the next few posts we will look at how exactly VLT is configured and map theory to practice.

vDS Health Check: Useful, but Overlooked

June 11, 2016

As of June 30, 2016, the vSphere Enterprise licence will no longer be available. As more and more customers start moving to the Enterprise Plus licensing scheme, we will see wider adoption of Enterprise Plus features, such as vSphere Distributed Switch, SIOC, NIOC and Storage DRS.

Therefore, there will be a continuing demand for better coverage of these features, and I want to start blogging about them more to meet this demand. The first post will be about one of the hidden gems: the vSphere Distributed Switch health check.

Feature overview

The reason I picked the health check specifically is that it's very helpful when troubleshooting connectivity issues on vSphere Distributed Switch uplinks. At the same time it's lesser known, because it's buried deep in the vDS settings section, available only from the Web Client, and disabled by default.

vDS health check is capable of doing the following tests:

  • VLAN and MTU
  • Teaming and failover

By sending broadcasts from one link and receiving them on another, the vDS health check can determine whether a VLAN is not allowed on a trunk or whether there is an MTU mismatch. In the same way, if you're using LACP, vDS will alert you if there are any port channel misconfigurations.

Usage example

Before you can start using the vDS health check, you need to enable it in vSphere Web Client > Networking > dvSwitch > Manage > Settings > Health check. Click the Edit button and enable both tests.

[Screenshot: enabling vDS health check]

Now if you go to the Monitor tab and click on the Health section, after a few minutes of initial checks you will see a per-host breakdown of identified issues.

[Screenshot: health check results]

In my case I was able to immediately determine that VLAN 120 was not trunked on the physical switch. The port group this VLAN ID was assigned to had no VMs at the time, so the issue was fixed proactively, before it could cause any problems.

[Screenshot: VLAN trunk mismatch warning]

Possible use cases

The above example is a very straightforward one. The VLAN was not added to the trunk on any of the uplink ports of the physical switch, so the issue would've been discovered right after the first VM was added to the port group.

But what if the VLAN was missing on only one of the host's uplinks? A VM could be running fine on another host, and after a vMotion (during potential maintenance work on that host) it could get migrated to the affected host and lose connectivity. The result: impact to production workloads and time wasted on troubleshooting.

MTU checks are particularly helpful in environments where a non-standard MTU size is used, such as 9000-byte jumbo frames for iSCSI. It's important for the MTU to match on both the vDS and the physical switch, and this check confirms exactly that.

And last but not least, the teaming and failover test can be useful when you're using the LACP capability of the vDS and one of the uplinks is not added to the port channel configuration, which can also cause some nasty issues.

Conclusion

In my opinion vSphere Distributed Switch Health Check is one of those valuable but overlooked features. I suggest giving it a go if you haven't already done so. It will notify you of any newly introduced network issues, or, who knows, maybe it will even find a mismatch in your current vDS configuration.

Beginner’s Guide to Dell N4000 Series Switches

January 18, 2016

Dell N-Series switches run on Dell Network Operating System (DNOS) version 6.x. Unlike Dell S-Series switches, which run on DNOS 9.x derived from the Force10 Operating System (FTOS), DNOS 6.x came from the PowerConnect switch series and shares the same codebase. So if you've ever worked with PowerConnect switches, the N-Series syntax should be very familiar.

In my case I had two Dell N4032F switches. But the same set of commands applies to any other N4000 Series switch.

Initial Configuration

When you first turn the switch on, it gives you 60 seconds to enter the wizard, where you can set up network settings for the Out-of-Band (OOB) management interface and change the admin password. If you miss it, you can reboot the switch and it will show the same wizard prompt again when it boots up. Or you can set it up from the CLI:

# interface out-of-band
# ip address 10.10.10.10 255.255.255.0 10.10.10.254

# show ip interface out-of-band

Once you get to the CLI prompt, configure hostname and enable SSH:

# hostname n4032f-prod

# crypto key generate rsa
# crypto key generate dsa
# ip ssh server
# ip telnet server disable
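If you want to confirm that SSH is on and check its settings, something like this should work (I believe DNOS 6.x supports this show command, but double-check on your firmware version):

# show ip ssh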

Stacking

Dell N4000 Series switches support both stacking and MLAG (Multi-chassis Link Aggregation). One of the drawbacks of the stack configuration is disruptive firmware upgrades. When you update firmware on the stack master, firmware is distributed to all stack members and all switches are rebooted simultaneously.

In MLAG each switch has its own control plane and can be rebooted independently. This is also MLAG's shortcoming: unlike a stack, where all units act as one switch, in MLAG you have to manage each switch separately.

In my case I chose stacking for its simplicity.

[Image: Dell N4000]

N4000 switches are stacked using the two 40Gb QSFP ports located at the front. QSFP ports are not configured in stack mode by default, which you need to change on both switches before you can build a stack:

# stack
# stack-port Fortygigabitethernet 1/1/1 stack
# stack-port Fortygigabitethernet 1/1/2 stack

# show switch stack-ports

Once QSFP ports on both switches are configured, disconnect power from both switches and boot the switch you want to be the stack master first (typically the top switch). When the first switch has fully booted, boot the second switch and check the status. This is what you should see:

# show switch

[Screenshot: show switch output with both stack members]

Firmware Upgrade

If it’s not a brand new switch, save the config before doing the firmware upgrade:

# copy run start
# copy running-config tftp://10.10.10.100/backup.txt

You can use any TFTP server for the firmware upgrade, such as the free Tftpd64 server.

[Screenshot: Tftpd64 TFTP server]

Then you upload the firmware image to the stack master and reload the stack:

# copy tftp://10.10.10.100/N4000v6.2.7.2.stk backup
# boot system backup
# reload
# show version

The firmware is uploaded to the backup image. Then you select the backup image for the next boot and reload the stack. When both switches reboot, you should see something similar to this:

[Screenshot: show version output after the firmware upgrade]

As part of the upgrade process, the new firmware is automatically copied from the master to all stack members, which is the default behaviour. You can confirm it is enabled using the following command:

# show auto-copy-sw

Flow Control, Jumbo Frames and iSCSI Optimization

In my case I used two N4032F switches for an iSCSI backbone, so I needed to make sure that Flow Control and Jumbo Frames were enabled on the switches.

Flow Control is enabled by default, which you can confirm by the following command:

# show storm-control

To globally enable Jumbo Frames on all ports type:

# system jumbo mtu 9216

# show system mtu

Interestingly, Dell N4000 Series switches also have built-in iSCSI optimization, which can detect iSCSI sessions by snooping the traffic on ports 3260 and 860. It then prioritizes iSCSI traffic over the other types of traffic to guarantee low latency for storage I/O. To show iSCSI settings:

# show iscsi

By default switches only track the sessions. Traffic prioritization is disabled by default and has to be enabled manually. This didn’t matter in my case, as the switches were dedicated for storage traffic. But if you share switches between storage and server traffic, you may want to enable it. Refer to the switch User’s Configuration Guide for details.

If you're using a Dell Compellent storage array with N4000 switches, also make sure to apply the Compellent profile to the ports the storage array is connected to:

# macro global apply profile-compellent-nas $interface_name te1/0/1
# macro global apply profile-compellent-nas $interface_name te1/0/2
# macro global apply profile-compellent-nas $interface_name te1/0/3
# macro global apply profile-compellent-nas $interface_name te1/0/4
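To see what the macro actually changed on a port, you can review its running configuration (a quick sanity check; depending on the firmware you may need to spell out the full interface name):

# show running-config interface te1/0/1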

VLANs, Trunks and Port Channels

Again, I didn't use any VLANs and trunks, because the switches were dedicated to iSCSI traffic and were separate from the LAN core. And I didn't need port channels either, as they are not required for iSCSI.

Your scenario might be different. For instance, if you have vSphere hosts connected to a NetApp array over NFS, you may want to create a Multi-Mode (LACP) VIF on the NetApp side. If that’s the case, to create a port channel on the Multi-Mode VIF ports use the following:

# interface range te1/0/2,te2/0/2
# channel-group 1 mode active
# show interfaces po1

If the switches are used for both storage and VM traffic, then you’ll need to configure the server ports and uplink them to your network core. Create your VLANs first:

# vlan 10,20,30

Configure vSwitch uplinks from the ESXi hosts. In a typical vSphere environment, traffic is tagged on the vSwitch side, which means that server ports should be configured as trunks:

# interface range te1/0/3-6,te2/0/3-6
# switchport mode trunk
# switchport trunk allowed vlan 10,20,30
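To confirm that a server port is trunking and allows the right VLANs, you can check its switchport settings (shown here for one of the ports; DNOS 6.x syntax):

# show interfaces switchport te1/0/3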

And finally configure uplinks to the network core. Depending on how your LAN core is set up, you may want to create a port channel to the upstream switch and trunk the required VLANs:

# interface range te1/0/1,te2/0/1
# channel-group 2 mode active
# switchport mode trunk
# switchport trunk allowed vlan 10,20,30
# show interfaces po2

Conclusion

This guide didn't include information on Spanning Tree, QoS or any of the switch's Layer 3 features, but I hope it could get you started. At the end of the day, every environment is different. If you need additional information, refer to the switch configuration guides available on the Dell website.

Force10 MXL: Initial Configuration

March 14, 2015

Continuing the series of posts on how to deal with Force10 MXL switches, this one is about VLANs, port channels, tagging and all the basic stuff. It's not much different from other vendors like Cisco or HP; at the end of the day it's the same networking standards.

If you want to match the terminology with Cisco, for instance, then what you are used to as EtherChannels are called Port Channels on Force10. And trunk/access ports from Cisco are called tagged/untagged ports on Force10.

Configure Port Channels

If you are after dynamic LACP port channels (as opposed to static ones), they are configured in two steps. The first step is to create the port channel itself:

# conf t
# interface port-channel 1
# switchport
# no shutdown

Then you enable LACP on the interfaces you want to add to the port channel. I have a four-switch stack and use the 0/.., 1/.. type of syntax:

# conf t
# int range te0/51-52 , te1/51-52 , te2/51-52 , te3/51-52
# port-channel-protocol lacp
# port-channel 1 mode active

To check whether the port channel has come up, use this command. The port channel obviously won't initialize unless it's also set up on the other side.

# show int po1 brief

[Screenshot: show int po1 brief output]

Configure VLANs

Then you create your VLANs and add ports. Typically, if you have vSphere hosts connected to the switch, you tag traffic at the ESXi host level. So both your host ports and the port channel will need to be added to the VLANs as tagged. If you have any standalone non-virtualized servers, you'll use untagged ports.

# conf t
# interface vlan 120
# description Management
# tagged Te0/1-4
# tagged Te2/1-4
# tagged Po1
# no shutdown
# copy run start

I have four hosts. Each host has a dual-port NIC which connects to two fabrics – switches 0 and 2 in the stack (1 port per fabric). I allow VLAN 120 traffic from these ports through the port channel to the upstream core switch.

You'll most likely have more than one VLAN. At least one for Management and one for Production if it's vSphere. But the process for the rest is exactly the same.
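To verify that the right ports and the port channel ended up tagged in a VLAN, you can check it per VLAN (FTOS syntax):

# show vlan id 120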

The other switch

Just to give you the whole picture, I'll include the configuration of the switch on the other side of the trunk. I had a modular HP switch with 10Gb modules. The config for it would look like the following:

# conf t
# trunk I1-I8 trk1 lacp
# vlan 120 tagged trk1
# write mem

I1 to I8 here are ports, where I is the module and 1 to 8 are the ports within that module.
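To double-check the HP side, you can confirm that the trunk came up and that VLAN 120 is tagged on it (ProCurve/ProVision commands; the exact output varies by software version):

# show trunks
# show vlans 120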

VLANs, trunking and VTP

July 3, 2012

Virtual LANs

On a pure Layer 2 switch, all connected hosts are treated as a single LAN, even though they might belong to several different networks. It means that when a broadcast frame (or a frame to a host with an unknown MAC) comes in, it is flooded to all ports. This is insecure and can overwhelm mid-size to large networks, and it is the reason why the concept of VLANs, as well as the IEEE 802.1Q, ISL and VTP protocols, was developed. A VLAN segments Ethernet traffic to a particular set of ports. In almost all cases a VLAN consists of hosts from one network. To create a VLAN you run:

configure terminal
interface range FastEthernet 0/15 - 16
switchport access vlan 2

Since VLAN 2 doesn't exist, it is created, and ports 15 and 16 are added to it. VLAN 1 is the default VLAN, where all ports initially reside, and it is reserved.
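To verify which ports ended up in which VLAN, the usual check from the privileged exec prompt is:

show vlan brief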

VLAN Trunking

Now let's consider a situation where you have hosts from one network connected to two switches. It's rare, but possible. For example, you have a network with 100 Mbit devices (tape library, UPS NMC) and 1000 Mbit devices (storage, servers), and you don't want to waste 1000 Mbit ports on 100 Mbit devices, so you connect them to a second 100 Mbit switch. Now when a host on one switch sends a packet to an unknown host on the other switch (or sends a broadcast frame) and the frame is flooded, the switch on the other side needs to know which VLAN it came from. Otherwise the switch has to discard it, since it floods frames only within VLANs and the VLAN ID is unknown in this case. Here you need to configure the link between the switches as a trunk. It means that before sending the frame, the switch marks it with the VLAN ID, and the other switch forwards it only to ports in that VLAN. There are two VLAN trunking protocols: the proprietary Cisco ISL (outdated) and IEEE 802.1Q (the most widely used). By default Cisco switches are configured to negotiate trunking if asked to do so, but you need to configure the switch on either side to initiate the negotiation:

configure terminal
interface gigabit 0/1
switchport mode dynamic desirable
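Once both sides have negotiated trunking, you can verify it from the privileged exec prompt:

show interfaces trunk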

Rationale behind trunking

Networks split between switches are not that common a case. Say you want to use VLANs for security and/or efficiency reasons, but each particular network is bound to one switch. All broadcast and unicast traffic to hosts within the same network does not travel outside the switch they are connected to. And unicast traffic to other networks can travel straight to the router (according to basic routing rules) and from the router down to the particular host. The corresponding port where the destination host is connected can be identified using the destination MAC. It seems that nobody needs to know VLAN IDs in this case. So the question is: "Do you need trunking here?" And the answer is yes.

It's worth starting by saying that ports on a Cisco switch can be either access ports, where end hosts are connected, or trunk ports, which are links between switches or routers. So when a packet travels through a trunk port, it's marked with a tag by design. There are several reasons behind that. The simplest one is ARP requests. When a router receives a packet to route to another network, it first needs to know the MAC of the destination host. To find it out, the router sends an ARP request, which is a broadcast packet. If there were no VLAN tag on this ARP request, it would have to be flooded on all ports of all switches along the path to the destination. And that would break the VLAN concept at its core: broadcast traffic has to be limited to a particular VLAN.

Another reason for marking each packet with a VLAN ID is efficiency. When a switch receives a packet and looks up the destination in its MAC address table, it's faster to find the MAC when MAC addresses are grouped by VLAN ID. The switch doesn't need to look through all MACs, only those in the same VLAN.

In fact, there are many other reasons for using VLAN tags by default. I gave two, which answer the question without digging into details.

VLAN Trunking Protocol

There is another Cisco proprietary feature called VTP. VTP exchanges information about VLAN IDs and names, which means you configure a particular VLAN once on one switch and all other switches pull this information from it. It's not a frequently used feature, so I won't describe it in detail.
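For completeness, here is a minimal sketch of what the VTP setup looks like on the switch acting as the server ("LAB" is a hypothetical domain name):

configure terminal
vtp domain LAB
vtp mode server
end
show vtp status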

Random DC pictures

January 19, 2012

Several pictures of server room hardware with no particular topic.

Click pictures to enlarge.

10kVA APC UPS.

UPS’s Network Management Card (NMC) (with temperature sensor) connected to LAN.

Here you can see battery extenders (white plugs). They allow the UPS to support a 5kVA load for 30 minutes.

Two Dell PowerEdge 1950 servers with 8 cores and 16GB RAM each, configured as a VMware High Availability (HA) cluster.

Each server has 3 virtual LANs. Each virtual LAN has its own NIC, which in turn has a multi-path connection to the Cisco switches via two cables, 6 cables in total.

Two Cisco switches that maintain LAN connections for the NetApp filers, Dell servers, Sun tape library and the APC NMC card. The two switches are tied together by an optical cable. The uplink is a 2Gb/s trunk.

HP rack with 9 HP ProLiant servers, HP autoloader and MSA 1500 storage.

HP autoloader with 8 cartridges.

HP MSA 1500 storage which is completely FC.

Hellova cables.