Posts Tagged ‘VLAN’

Beginner’s Guide to HPE 5000 Series Switches

October 14, 2017

I don’t closely track the popularity of my blog. If what I share helps people in their day-to-day job, that’s already good enough for me. But I do look at site statistics now and then out of curiosity, and it seems that network-related posts attract a lot of attention. A blog post I wrote a while ago on Dell N4000 switches has quickly climbed into the top five over the last year.

So it seems that there is a demand for entry-level switch configuration guides. I’ve worked with quite a few different switch brands over the years, so I thought I would build on the success of the Dell blog post and this time write about the HPE FlexNetwork/FlexFabric 5000 switch series.

Operating Systems

HPE has several network switch product lines. I won’t even try to cover all of them in this post. But it’s important to know that there are a few different operating systems you can encounter while working with HPE network switches. There is the familiar ProCurve product portfolio (now merged with Aruba), which is based on the ProVision operating system.

The HPE FlexNetwork/FlexFabric 5000 series, on the other hand, is based on the Comware operating system. It has a different CLI command set and can come as a complete surprise if you’ve worked only with ProCurve switches before. So this blog post will be particularly valuable for those who are dealing with HPE 5000 switches for the first time.

The following guide has been tested on a pair of HPE FlexFabric 5700 series switches. Even though the commands are mostly the same, there might be some minor differences on other switch series, such as the FlexNetwork 5800.

Initial Configuration

When the switch boots for the first time it starts automatic configuration and tries to obtain its settings over DHCP. You can interrupt this with Ctrl+C to get straight to the CLI.

You start in user view where you can run display commands to review switch settings. To start the configuration, change to system view:

> system-view

Let’s start by configuring remote access to the switch. There are two ways you can do that. You can either use the out-of-band management port:

> interface M-GigabitEthernet 0/0/0
> ip address 10.10.10.10 255.255.255.0
> ip route-static 0.0.0.0 0.0.0.0 10.10.10.1

Or you can configure a VLAN interface IP address:

> interface vlan-interface 1
> ip address 10.10.10.10 255.255.255.0
> ip route-static 0.0.0.0 0.0.0.0 10.10.10.1

Then configure the switch name, enable SSH and set passwords, and you can start managing the switch over SSH:

> sysname switchname

> public-key local create rsa
> ssh server enable
> user-interface vty 0 15
> authentication-mode scheme
> protocol inbound ssh

> super password simple yourpassword
> local-user admin
> password simple yourpassword
> authorization-attribute user-role level-0
> service-type ssh

User “admin” will have an unprivileged role. Once logged in, you will need to run the following command and enter the super password to elevate to network admin rights:

> super

Intelligent Resilient Framework

In small non-business-critical environments one standalone switch is usually sufficient. In larger environments switches are typically deployed in pairs for redundancy. To simplify management and to avoid network loops most switches support some sort of MLAG or stacking. IRF is HPE’s version of it.

Determine which ports you’re going to use for IRF. On the 5700 series there are two QSFP+ ports dedicated to it. Then, on the first switch (master), run the following commands (it’s recommended to shut down the ports before you set them up as IRF ports):

> irf member 1 priority 32
> int range FortyGigE 1/0/41 to FortyGigE 1/0/42
> shutdown
> irf-port 1/1
> port group interface FortyGigE 1/0/41
> irf-port 1/2
> port group interface FortyGigE 1/0/42
> int range FortyGigE 1/0/41 to FortyGigE 1/0/42
> undo shut
> save
> irf-port-configuration active

On the second switch (slave) run the following commands to change the IRF ID to 2:

> irf member 1 renumber 2
> reboot

When the switch comes up, configure IRF ports:

> irf member 2 priority 30
> int range FortyGigE 2/0/41 to FortyGigE 2/0/42
> shutdown
> irf-port 2/1
> port group interface FortyGigE 2/0/41
> irf-port 2/2
> port group interface FortyGigE 2/0/42
> int range FortyGigE 2/0/41 to FortyGigE 2/0/42
> undo shut
> save
> irf-port-configuration active

Now you can connect the physical IRF ports. IRF uses a ring topology, which means (in my case) port 1/0/41 should connect to 2/0/42 and port 1/0/42 to 2/0/41.

The second switch will automatically reboot, and if everything is configured correctly, you should see both switches join the IRF fabric. Member switch 1 has the higher priority of 32 and becomes the master:

> display irf
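
If the fabric doesn’t come up, it is also worth checking the IRF links and port bindings themselves. A minimal verification sketch, assuming the standard Comware 7 display commands (exact output varies between releases):

> display irf link
> display irf configuration

The first command shows whether the physical IRF links are up; the second shows which physical ports are bound to each IRF port on every member.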

Firmware Upgrade

A firmware upgrade is the next logical step after you set up IRF. The latest firmware revision for the switches can be downloaded from the HPE website. Keep in mind you will need an HPE Passport account with a valid service agreement (SAID) added to it.

You will also need a TFTP server to upgrade the firmware. There are a few of them out there, but the most commonly used is probably Tftpd64.

Once the TFTP server is up and running and the firmware file has been copied to it, perform the upgrade:

> tftp 10.10.10.20 get 5700-CMW710-R2432P03.ipe
> boot-loader file flash:/5700-CMW710-R2432P03.ipe slot 1 main
> boot-loader file flash:/5700-CMW710-R2432P03.ipe slot 2 main
> irf auto-update enable
> reboot

Confirm firmware has been updated:

> display version

VLANs, Aggregation Groups and Tagging

In Comware the term “aggregation group” is used to describe what is called a “port channel” in the Cisco world. Trunk/access ports are also called tagged/untagged ports throughout the documentation.

In this section we will discuss a few common port configuration scenarios:

  • Untagged ports, which can be your iSCSI storage array ports
  • Tagged ports, such as your VMware host uplinks
  • Aggregation groups, typically used for LAGs to upstream switches

First of all, create the VLANs and give them descriptions:

> vlan 10
> description iSCSI
> vlan 20
> description Server
> vlan 30
> description Dev and test

Then specify untagged ports:

> vlan 10
> port te 1/0/1
> port te 2/0/1

To configure tagged ports and allow certain VLANs (ports will be added to the VLANs automatically):

> int te 1/0/2
> description ESX01 vmnic0
> port link-type trunk
> port trunk permit vlan 20 30
> int te 2/0/2
> description ESX02 vmnic0
> port link-type trunk
> port trunk permit vlan 20 30

And to create an LACP aggregation group:

> interface bridge-aggregation 1
> description Trunk to upstream switch
> link-aggregation mode dynamic
> port link-type trunk
> port trunk permit vlan 20 30

> interface te 1/0/3
> port link-aggregation group 1
> interface te 2/0/3
> port link-aggregation group 1
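
To confirm that the LACP bundle has actually formed and the member ports are in a Selected state, a quick check along these lines should work (standard Comware display commands; Bridge-Aggregation 1 is the group created above):

> display link-aggregation verbose Bridge-Aggregation 1
> display link-aggregation summary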

Common Commands

Here are some other useful commands that don’t fall under any specific category but are handy to know.

Display switch configuration:

> display current-configuration

Save switch configuration:

> save

Shut down a port:

> int te 1/0/27
> shutdown

Undo a command:

> undo shutdown

Conclusion

Whether you are a network engineer new to the Comware operating system or a VMware administrator looking for a quick cheat sheet for FlexNetwork/FlexFabric switches, I hope this guide has helped you get the job done.

If this blog post gets the same amount of popularity, maybe it will turn into another series. But for now – over and out.

Dell Force10 Part 1: Initial Configuration

July 3, 2016

When it comes to networking, Dell has two main series of switches: PowerConnect/N-Series, which run the DNOS 6.x operating system, and S/Z-Series switches, which run DNOS 9.x, derived from the Force10 OS (FTOS). In this series of blog posts we will go through the configuration of the Force10 switch series, using a Dell S4048-ON top-of-rack switch as an example.

It is interesting to note that, unlike other S-Series switches, the S4048-ON is an Open Networking switch. Dell is one of the first companies that, apart from its own OS, lets customers run other operating systems on its network switches, such as Cumulus Linux and Big Switch Networks Switch Light OS. While Cumulus and Big Switch have their own use cases, in this blog post we will look specifically at configuring FTOS.

Boot process

The S4048-ON comes from the factory pre-configured for bare metal provisioning (BMP), so the first time you boot the switch you will be presented with the BMP menu.

If you just want to boot FTOS, simply skip BMP by choosing A and the switch will boot the OS.

After some time BMP will time out. If you’ve missed the above menu, you can also disable BMP from the CLI using the following commands:

> enable
# stop bmp
# config
# reload-type normal-reload
# exit
# reload

When prompted, choose to save the configuration and proceed with the reload. After the switch has rebooted, check that the next boot is set to normal reload:

# show reload-type

Initial configuration

The first steps of any switch installation are assigning a hostname and configuring the management interface:

# hostname DELL4048-SWITCH
# int managementethernet 1/1
# ip address 172.10.10.2/24
# no shut
# management route 0.0.0.0/0 172.10.10.10

Then set the admin and enable passwords and allow remote management via SSH:

# enable password 123456
# username admin password 123456
# ip ssh server enable

Configure time zone and NTP:

# clock timezone UTC 11
# ntp server 172.10.10.20
# show ntp associations
# show ntp status
# show clock

Firmware upgrade

Force10 switches have two boot banks, A: and B:. It’s good practice to upload new firmware into one boot bank and keep the old firmware in the other in case you need to roll back.

The easiest way to upgrade is via TFTP using Tftpd64, which you can download for free from here. If you’re upgrading an existing switch, make sure to save the running config and make a backup. If it’s an initial install you can skip this step.

# copy run start
# copy start tftp://10.0.0.1/FORCE10_SWITCH_01.01.16.conf

Then upload the new firmware to image B:, change the active boot bank to B: and reload:

# show version
# show boot system stack-unit 1
# upgrade system tftp://10.0.0.1/FTOS-SK-9.9.0.0P9.bin b:
# conf t
# boot system stack-unit 1 primary system b:
# exit
# reload

You will be prompted to save the configuration and reboot. After the reboot you may be asked to enable SupportAssist. SupportAssist helps to automatically open Dell service tickets if there is a switch fault. You can enable SupportAssist by running the following commands and answering the prompts:

# conf t
# support-assist activate
# support-assist activity full-transfer start now
# show support-assist status

My pair of switches was configured in a Virtual Link Trunking (VLT) domain. I’ll explain how VLT works later in the series. But from the upgrade point of view, each switch in a VLT domain is treated as a separate switch and has to be upgraded separately. If you decide to use a stack instead of VLT, you can find the upgrade process for a Force10 stack in my other post about Dell MXL switches here.

Spanning tree

Spanning Tree Protocol (STP) helps to prevent network topology loops and is highly recommended for use in any network. Switches connected in an actual loop topology are rare in today’s networks. But STP can save you from the consequences of a potential human error, such as a port channel misconfiguration. If, instead of creating one port channel with two links, you mistakenly create two port channels with one link each and both carry the same VLANs, you’ve accidentally created a loop, which will bring your whole network to an immediate halt.

It’s good practice to enable STP as a safeguard against such configuration errors. The S4048-ON supports STP, RSTP, MSTP and PVST+. In my case the S4048s were uplinked into an HP core, which supported STP, RSTP and MSTP. If you have Cisco switches in your network core you can use PVST+. I used RSTP, which is a good choice if you don’t require the enhancements of MSTP and PVST+ in your network. Just make sure not to use basic STP, as it provides the slowest convergence.

# protocol spanning-tree rstp
# no disable
# show spanning-tree rstp

In every STP topology there is also a root switch, which by default is selected automatically. For more deterministic STP behaviour it’s recommended to select the root switch manually, by assigning the lowest STP priority to it. Typically your core switch should be your root switch. In my case it was an HP core switch, which was assigned a priority of 0.
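
In my setup the priority was lowered on the HP side rather than on the Force10. If the S4048 itself happened to be your core and you wanted it to win the root election, a minimal sketch of lowering its RSTP bridge priority would look something like this (the value must be a multiple of 4096; treat the exact syntax as an assumption and verify it against your FTOS release):

# conf t
# protocol spanning-tree rstp
# bridge-priority 4096
# show spanning-tree rstp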

When configuring server and storage facing ports make sure to enable EdgePort mode to minimize the time it takes for the port to come online:

# int range Te1/45-1/48
# spanning-tree rstp edge-port
# switchport
# no shut

If you want to know more about how STP works, you can read a few of my previous blog posts on STP here and here.

Flow control

To avoid dropped packets on 10Gb switch ports at times of potentially heavy utilization, it is also a best practice to enable, at a minimum, bi-directional Flow Control on the storage array ports. I enabled it on the iSCSI links connected from the Dell Compellent storage array:

# int range Te1/17-1/18
# flowcontrol rx on tx on

If you are specifically interested in switch best practices for Compellent and EqualLogic storage arrays, Dell has a full list of guides for various switches on its community wiki here.

Port channels and VLANs

Port channels and VLANs are configured similarly to how they are on any other switch, but I include them here in case you want to know the syntax. In this example we have two access ports, 1/46 and 1/47, and an uplink to the core configured as port channel 1:

# interface port-channel 1
# switchport
# no shutdown

# interface range Te1/1-1/2
# port-channel-protocol LACP
# port-channel 1 mode active
# no shutdown

# int vlan 254
# untagged Te1/46-1/47
# tagged po 1

Keep in mind that port channels are used either in single-switch configurations or when two or more switches are stacked together. If you’re using Virtual Link Trunking (VLT), you will need to create Virtual Link Trunks (VLTs), which are similar to port channels but have a slightly different syntax. We will talk about VLT in much more detail in the following Force10 posts.

Conclusion

One feature which I didn’t specifically mention in this blog post is Jumbo Frames. I tend not to use them in my deployments until I see convincing evidence of them making a difference for iSCSI/NFS storage implementations. I wrote a post about Jumbo Frames a long time ago here and haven’t changed my opinion since. I’d be interested to hear your thoughts if you have a different take on that.

vDS Health Check: Useful, but Overlooked

June 11, 2016

As of June 30, 2016, the vSphere Enterprise licence will no longer be available. As more and more customers move to the Enterprise Plus licensing scheme, we will see wider adoption of Enterprise Plus features, such as vSphere Distributed Switch, SIOC, NIOC and Storage DRS.

Therefore, there will be continuing demand for better coverage of these features, and I want to start blogging about them more to meet it. The first post will be about one of the hidden gems – the vSphere Distributed Switch Health Check.

Feature overview

The reason I picked health check specifically is that it’s very helpful when troubleshooting connectivity issues on vSphere Distributed Switch uplinks. At the same time it’s less well known, because it’s buried deep in the vDS settings section, is available only from the Web Client and is disabled by default.

vDS health check is capable of doing the following tests:

  • VLAN and MTU
  • Teaming and failover

By sending broadcasts from one link and receiving them on another, vDS health check can determine whether a VLAN is not allowed on a trunk or there is an MTU mismatch. In the same way, if you’re using LACP, vDS will alert you to any port channel misconfigurations.

Usage example

Before you can start using vDS health check you need to enable it in vSphere Web Client > Networking > dvSwitch > Manage > Settings > Health check. Click on the Edit button and enable both tests.

Now if you go to the Monitor tab and click on the Health section, after a few minutes of initial checks you will see a per-host breakdown of identified issues.

In my case I was able to immediately determine that VLAN 120 was not trunked on the physical switch. The port group this VLAN ID was assigned to had no VMs at the time, so the problem was fixed proactively, before it could cause any issues.

Possible use cases

The above example is a very straightforward one. The VLAN was not added to the trunk on the physical switch for any of the uplink ports, and the issue would have been discovered right after the first VM was added to the port group.

But what if the VLAN was missing on only one of the host’s uplinks? A VM would run fine on another host, and after a vMotion (during potential maintenance work on that host) it could get migrated to the affected host and lose connectivity. The result – impact to production workloads and time wasted on troubleshooting.

MTU checks are particularly helpful in environments where a non-standard MTU size is used, such as 9000-byte jumbo frames for iSCSI. It’s important for the MTU to match on both the vDS and the physical switch, and this check confirms exactly that.
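
If you want to double-check a suspected MTU mismatch by hand, one simple way is a don’t-fragment ping from the ESXi shell. A minimal sketch, assuming 9000-byte jumbo frames and a storage target at 10.10.10.1, which is just an example address:

# vmkping -d -s 8972 10.10.10.1

The -d flag sets the don’t-fragment bit, and 8972 is the largest payload that fits into a 9000-byte frame once the IP and ICMP headers are accounted for. If this ping fails while a default-size vmkping succeeds, something along the path is not passing jumbo frames. On newer ESXi releases you may need to add -I vmkX to pick a specific VMkernel interface.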

And last but not least, the teaming and failover test can be useful when you’re using the LACP capability of vDS and one of the uplinks is not added to the port channel configuration, which can also cause some nasty issues.

Conclusion

In my opinion vSphere Distributed Switch Health Check is one of those valuable but overlooked features. I suggest giving it a go if you haven’t already done so. It will notify you of any newly introduced network issues or, who knows, maybe it will even find a network mismatch in your current vDS configuration.

Beginner’s Guide to Dell N4000 Series Switches

January 18, 2016

Dell N-Series switches run on Dell Network Operating System (DNOS) version 6.x. Unlike Dell S-Series switches, which run on DNOS 9.x, derived from the Force10 Operating System (FTOS), DNOS 6.x came from the PowerConnect switch series and shares the same codebase. So if you’ve ever worked with PowerConnect switches, the N-Series syntax should be very familiar.

In my case I had two Dell N4032F switches. But the same set of commands applies to any other N4000 Series switch.

Initial Configuration

When you first turn the switch on, it gives you 60 seconds to enter the wizard, where you can set up the network settings for the Out-of-Band (OOB) management interface and change the admin password. If you miss it, you can reboot the switch and it will show the same wizard prompt again when it boots up. Or you can set it up from the CLI:

# interface out-of-band
# ip address 10.10.10.10 255.255.255.0 10.10.10.254

# show ip interface out-of-band

Once you get to the CLI prompt, configure hostname and enable SSH:

# hostname n4032f-prod

# crypto key generate rsa
# crypto key generate dsa
# ip ssh server
# ip telnet server disable

Stacking

Dell N4000 Series switches support both stacking and MLAG (Multi-chassis Link Aggregation). One of the drawbacks of a stack configuration is disruptive firmware upgrades. When you update firmware on the stack master, the firmware is distributed to all stack members and all switches are rebooted simultaneously.

In MLAG each switch has its own control plane and can be rebooted independently. This is also MLAG’s shortcoming, because unlike a stack, where all units act as one switch, with MLAG you have to manage each switch separately.

In my case I chose stacking for its simplicity.

N4000 switches are stacked using the two 40Gb QSFP ports located at the front. The QSFP ports are not configured in stack mode by default, which you need to change on both switches before you can build a stack:

# stack
# stack-port Fortygigabitethernet 1/1/1 stack
# stack-port Fortygigabitethernet 1/1/2 stack

# show switch stack-ports

Once the QSFP ports on both switches are configured, disconnect power from both switches and boot the switch you want to be the stack master first (typically the top switch). When the first switch has fully booted, boot the second switch and check the stack status:

# show switch

Firmware Upgrade

If it’s not a brand new switch, save the config before doing the firmware upgrade:

# copy run start
# copy running-config tftp://10.10.10.100/backup.txt

You can use any TFTP server for the firmware upgrade, such as the free Tftpd64 server.

Then you upload the firmware image to the stack master and reload the stack:

# copy tftp://10.10.10.100/N4000v6.2.7.2.stk backup
# boot system backup
# reload
# show version

Firmware is uploaded to the backup image. You then select the backup image for the next boot and reload the stack. When both switches have rebooted, the show version output should report the new firmware on both stack members.

As part of the upgrade process the new firmware is automatically uploaded from the master to all stack members, which is the default behaviour. You can confirm it is enabled using the following command:

# show auto-copy-sw

Flow Control, Jumbo Frames and iSCSI Optimization

In my case I used the two N4032F switches for an iSCSI backbone, so I needed to make sure that Flow Control and Jumbo Frames were enabled on the switches.

Flow Control is enabled by default, which you can confirm with the following command:

# show storm-control

To globally enable Jumbo Frames on all ports type:

# system jumbo mtu 9216

# show system mtu

Interestingly, Dell N4000 Series switches also have built-in iSCSI optimization, which can detect iSCSI sessions by snooping the traffic on ports 3260 and 860. It then prioritizes iSCSI traffic over other types of traffic to guarantee low latency for storage I/O. To show the iSCSI settings:

# show iscsi

By default the switches only track sessions. Traffic prioritization is disabled by default and has to be enabled manually. This didn’t matter in my case, as the switches were dedicated to storage traffic. But if you share switches between storage and server traffic, you may want to enable it. Refer to the switch User’s Configuration Guide for details.

If you’re using a Dell Compellent storage array with N4000 switches, also make sure to apply a Compellent profile to the ports the storage array is connected to:

# macro global apply profile-compellent-nas $interface_name te1/0/1
# macro global apply profile-compellent-nas $interface_name te1/0/2
# macro global apply profile-compellent-nas $interface_name te1/0/3
# macro global apply profile-compellent-nas $interface_name te1/0/4

VLANs, Trunks and Port Channels

Again, I didn’t use any VLANs or trunks, because the switches were dedicated to iSCSI traffic and were separate from the LAN core. And I didn’t need port channels either, as they are not required for iSCSI.

Your scenario might be different. For instance, if you have vSphere hosts connected to a NetApp array over NFS, you may want to create a Multi-Mode (LACP) VIF on the NetApp side. If that’s the case, use the following to create a port channel on the Multi-Mode VIF ports:

# interface range te1/0/2,te2/0/2
# channel-group 1 mode active
# show interfaces po1

If the switches are used for both storage and VM traffic, then you’ll need to configure the server ports and uplink them to your network core. Create your VLANs first:

# vlan 10,20,30

Configure vSwitch uplinks from the ESXi hosts. In a typical vSphere environment, traffic is tagged on the vSwitch side, which means that server ports should be configured as trunks:

# interface range te1/0/3-6,te2/0/3-6
# switchport mode trunk
# switchport trunk allowed vlan 10,20,30

And finally configure uplinks to the network core. Depending on how your LAN core is set up, you may want to create a port channel to the upstream switch and trunk the required VLANs:

# interface range te1/0/1,te2/0/1
# channel-group 2 mode active
# switchport mode trunk
# switchport trunk allowed vlan 10,20,30
# show interfaces po2

Conclusion

This guide didn’t include information on Spanning Tree, QoS or any of the switch’s Layer 3 features, but I hope it gets you started. At the end of the day, every environment is different. If you need additional information, refer to the switch guides on the Dell website.

Force10 MXL: Initial Configuration

March 14, 2015

Continuing the series of posts on how to deal with Force10 MXL switches, this one is about VLANs, port channels, tagging and all the basic stuff. It’s not much different from other vendors like Cisco or HP. At the end of the day it’s all the same networking standards.

If you want to match the terminology with Cisco, for instance, then what you are used to as EtherChannels are called Port Channels on Force10. And trunk/access ports from Cisco are called tagged/untagged ports on Force10.

Configure Port Channels

If you are after dynamic LACP port channels (as opposed to static), they are configured in two steps. The first step is to create the port channel itself:

# conf t
# interface port-channel 1
# switchport
# no shutdown

Then you enable LACP on the interfaces you want to add to the port channel. I have a four-switch stack and use the 0/.., 1/.. type of syntax:

# conf t
# int range te0/51-52 , te1/51-52 , te2/51-52 , te3/51-52
# port-channel-protocol lacp
# port-channel 1 mode active

To check whether the port channel has come up, use this command. The port channel obviously won’t initialize if it’s not also set up on the other side of the link.

# show int po1 brief

Configure VLANs

Then you create your VLANs and add ports. Typically, if you have vSphere hosts connected to the switch, you tag traffic at the ESXi host level. So both your host ports and the port channel will need to be added to the VLANs as tagged. If you have any standalone non-virtualized servers, you’ll use untagged ports.

# conf t
# interface vlan 120
# description Management
# tagged Te0/1-4
# tagged Te2/1-4
# tagged Po1
# no shutdown
# copy run start

I have four hosts. Each host has a dual-port NIC which connects to two fabrics – switches 0 and 2 in the stack (1 port per fabric). I allow VLAN 120 traffic from these ports through the port channel to the upstream core switch.

You’ll most likely have more than one VLAN: at least one for Management and one for Production if it’s vSphere. But the process for the rest is exactly the same.

The other switch

Just to give you the whole picture, I’ll include the configuration of the switch on the other side of the trunk. I had a modular HP switch with 10Gb modules. The config for it would look like the following:

# conf t
# trunk I1-I8 trk1 lacp
# vlan 120 tagged trk1
# write mem

I1 to I8 here are ports, where I is the module and 1 to 8 are the ports within that module.

Spanning Tree Protocol Overview

July 16, 2012

When it comes to switching, it is important to understand how STP works. STP was developed to prevent loops. For example, you connect three switches in a ring and some host sends a broadcast packet. Since a broadcast frame is flooded to all ports (forget about VLANs for a moment), it will keep circling the ring (Ethernet frames have no TTL to expire), causing a broadcast storm. This situation will never happen if you work on Cisco switches, as they have STP enabled by default. Some low-budget switches do not support STP at all.

To prevent loops STP disables some ports, or in other words puts them in a blocking state. Ports that are left to forward traffic are in a forwarding state. To exchange STP information switches use Bridge Protocol Data Units (BPDUs). They contain three main fields: root switch ID, sender switch ID and the cost to reach the root. IDs are effectively random, since they are based on priorities and MAC addresses. Cost depends on link speed: a 100Mb port’s cost is 19, 1Gb is 4, and so on.

STP starts by electing a root switch. All switches exchange their IDs and the switch with the lowest ID becomes the root switch. As stated above, the root switch is almost a random choice, but you can manually assign a priority if needed. Then the spanning tree algorithm (STA) searches for root ports (RP) and designated ports (DP). An RP is the port with the shortest path to the root switch. The shortest path is found based on link costs and, if they are equal, on switch IDs. A DP is the port with the lowest cost to the root on that Ethernet segment. An Ethernet segment here is a collision domain, which in a switched network is simply an Ethernet link between two switches. Basically, that means you will have one shortest path from each non-root switch to the root switch. On one side of each such link will be an RP and on the other a DP. All non-shortest paths will have a DP on one side and a non-DP, non-RP (blocked) port on the other side. Traffic will not traverse this blocked port, which prevents loops.
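
As a concrete illustration, this is roughly how you would influence the root election on a Cisco switch by lowering the bridge priority for a VLAN and then checking the result (a minimal sketch; VLAN 10 is just an example and the priority must be a multiple of 4096):

configure terminal
spanning-tree vlan 10 priority 4096
end
show spanning-tree vlan 10

The show output lists the root bridge ID, the local bridge ID and the role (root, designated or blocked) of every port in that VLAN.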

You may ask what the point of such a distinction between DP and RP is, if the only thing that matters is the shortest path. Even though the RP and DP lie on the shortest path to the root, just on opposite sides of the link, there is one significant distinction between them. The DP is the port from which Hello BPDUs are continuously sent. A Hello BPDU simply indicates that the link between switches is working and contains information which allows the switch on the other side of the link to find a new shortest path to the root in case an old link breaks. Another difference is that DPs exist not only on root paths, but on every Ethernet link.

Along with STP there is RSTP, which stands for Rapid Spanning Tree Protocol. The reason for RSTP is that STP converges slowly. Convergence is the process which happens when the network topology changes and switches need to re-evaluate port statuses (blocking/forwarding). STP takes approximately 50 seconds to converge; RSTP convergence time is 1 to 10 seconds.

STP and RSTP have several implementations. Cisco by default uses PVST+ (or simply PVST), which is an abbreviation for Per-VLAN Spanning Tree Plus, instead of pure IEEE STP. PVST creates one STP topology per VLAN. Instead of using one link for all VLANs and blocking all the others, you can use the first link for even VLANs and the second for odd ones, and PVST allows you to do that. Cisco’s implementation of RSTP is called PVRST (Per-VLAN Rapid Spanning Tree) or RPVST (Rapid Per-VLAN Spanning Tree). There is also an IEEE protocol in the same spirit, called MSTP – the Multiple Spanning Tree Protocol. MSTP is built on RSTP. Its difference from PVRST is that it doesn’t create a separate spanning tree for each VLAN, as PVRST does by design, but lets you map multiple VLANs to one spanning tree instance.

VLANs, trunking and VTP

July 3, 2012

Virtual LANs

If you think of a pure Layer 2 switch, then all hosts connected to it are considered a single LAN, even though they might be in several different IP networks. It means that when a broadcast frame (or a frame to a host with an unknown MAC) comes in, it is flooded to all ports. That is insecure and can overwhelm mid-size to large networks. This is the reason why the concept of VLANs, as well as the IEEE 802.1Q, ISL and VTP protocols, was developed. A VLAN confines Ethernet traffic to a particular set of ports. In almost all cases a VLAN consists of hosts from one network. To create a VLAN you run:

configure terminal
interface range FastEthernet 0/15 - 16
switchport access vlan 2

Since VLAN 2 doesn’t exist, it is created, and ports 15 and 16 are included in VLAN 2. VLAN 1 is the default VLAN, where all ports initially reside, and it is reserved.
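
To confirm that the VLAN was created and the ports were moved into it, you can check the VLAN table (a minimal sketch using standard IOS show commands):

show vlan brief
show interfaces fastethernet 0/15 switchport

The first command lists every VLAN with its member ports, and the second shows the access VLAN assigned to a particular port.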

VLAN Trunking

Now let’s consider a situation when you have hosts from one network connected to two switches. It’s rare, but possible. For example, you have a network with 100 Mbit devices (tape library, UPS NMC) and 1000 Mbit devices (storage, servers), and you don’t want to waste 1000 Mbit ports on 100 Mbit devices, so you connect them to a second, 100 Mbit switch. Now when a host on one switch sends a packet to an unknown host on the other switch (or sends a broadcast frame) and the packet is flooded, the switch on the other side needs to know which VLAN it came from. Otherwise it has to discard the packet, since it floods frames only inside VLANs and the VLAN ID is unknown in this case. Here you need to configure the link between the switches as a trunk. It means that before sending the packet, the switch will mark it with the VLAN ID and the other switch will forward it only to ports in this VLAN. There are two VLAN trunking protocols: the proprietary Cisco ISL (outdated) and IEEE 802.1Q (most used). By default Cisco switches are configured to negotiate trunking if asked to do so, but you need to configure the switch on either side to initiate the negotiation:

configure terminal
interface gigabitethernet 0/1
switchport mode dynamic desirable
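
Once the other side agrees to trunk, the negotiation status and the allowed VLANs can be verified with standard IOS commands (the interface name here is just an example):

show interfaces gigabitethernet 0/1 switchport
show interfaces trunk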

Rationale behind trunking

Networks split between switches are not that frequent a case. Say you want to use VLANs for security and/or efficiency reasons, but each particular network is bound to one switch. All broadcast and unicast traffic to hosts within the same network does not travel outside the switch where they are connected. And unicast traffic to other networks can travel straight to the router (according to basic routing rules) and from the router down to the particular host. The port where the destination host is connected can be identified using the destination MAC. It seems that nobody needs to know VLAN IDs in this case. And the question is: “Do you need trunking here?” And the answer is – yes.

It’s worth starting by saying that ports on a Cisco switch can be either access ports, where end hosts are connected, or trunk ports, which are links between switches or routers. So when a packet travels through a trunk port it’s marked with a tag by design. There are several reasons behind that. The simplest answer to this question is ARP requests. When a router receives a packet to route to another network, it first needs to know the MAC of the destination host. To find it out, the router sends an ARP request, which is a broadcast packet. If there were no VLAN tag on this ARP request, it would have to be flooded on all ports of all switches along the path to the destination. And that would break the VLAN concept at its core – broadcast traffic has to be limited to the particular VLAN.

Another reason for marking each packet with a VLAN ID is efficiency. When a switch receives a packet and looks up the destination in its MAC address table, it’s faster to find the MAC when MAC addresses are grouped by VLAN ID. The switch doesn’t need to look through all MACs, only those which are in the same VLAN.

In fact, there are many other reasons for using VLAN tags by default. I gave two, which answer the question without digging into details.

VLAN Trunking Protocol

There is another Cisco proprietary feature called VTP. VTP exchanges information about VLAN IDs and names. It means you configure a particular VLAN once on one switch, and all the other switches will then pull this information from it. It’s not a frequently used feature, so I won’t describe it in detail.

Switching Logic

June 8, 2012

If you are a junior admin in a small to medium organization, then building a campus network is simple. Buy several switches, connect desktops and switches together, and that’s it. You don’t need any additional configuration; all switches work right out of the box. However, it’s important to understand how packet switching works to troubleshoot problems that can show up later in your work.

Switching works at Layer 2. It means that the networking hardware logic operates with MAC addresses. Each time a switch receives a frame from a workstation or server, it remembers the source MAC address and the port it was received on. This is called the MAC address table, or switching table. When somebody wants to send a packet to another host with a particular IP address, they send an ARP request packet: essentially, “tell me who has IP address 12.34.56.78”. The host replies with its MAC address, and the sender can then build a frame to it.

Initially the switch has an empty switching table and does not know where to send frames. When the switch doesn’t have a particular MAC address in its table, it forwards (floods) the frame to all ports. If the next switch doesn’t know this MAC either, it forwards the frame further. When the frame finally reaches its destination, the host answers and the switch adds its MAC address to the table.
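
On a Cisco switch you can watch this learning process directly in the MAC address table (a minimal sketch; the VLANs, MACs and ports in the output will obviously depend on your network):

show mac address-table dynamic
show mac address-table count

The first command lists the dynamically learned MAC-to-port mappings, and the second shows how many entries the table currently holds.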

If you don’t use VLANs, all switches in your network form a single broadcast domain. It means that when a host sends a broadcast message, an ARP request for example, and the host with this IP address is powered off, the ARP request will traverse the whole network. It’s important to bear in mind that if you have many hosts in your network, broadcast traffic can eventually slow it down. VLANs are usually the solution here.