Posts Tagged ‘Compellent’

Dell Force10 Part 1: Initial Configuration

July 3, 2016

[Image: Dell Force10 S4048-ON switch]
When it comes to networking, Dell has two main series of switches: PowerConnect/N-Series, which run the DNOS 6.x operating system, and S/Z-Series switches, which run DNOS 9.x derived from the Force10 OS (FTOS). In this series of blog posts we will go through the configuration of the Force10 switch series, using a Dell S4048-ON top-of-rack switch as an example.

It's interesting to note that, unlike other S-Series switches, the S4048-ON is an Open Networking switch. Dell is one of the first companies that, apart from its own OS, lets customers run other operating systems on its network switches, such as Cumulus Linux OS and Big Switch Networks Switch Light OS. While Cumulus and Big Switch have their own use cases, in this blog we will look specifically at configuring FTOS.

Boot process

S4048-ON comes from the factory pre-configured for bare metal provisioning (BMP). This is what you will see when you boot the switch for the first time:

[Image: BMP boot prompt]

If you just want to boot FTOS, simply skip BMP by choosing A and the switch will boot the OS.

After some time BMP will time out. If you've missed the above wizard, you can also disable BMP from the CLI using the following commands:

> enable
# stop bmp
# config
# reload-type normal-reload
# exit
# reload

When prompted choose to save the configuration and proceed with reload. After the switch has rebooted check that the next boot is set to normal reload:

# show reload-type

Initial configuration

The first steps of any switch installation are assigning a hostname and configuring the management interface:

# hostname DELL4048-SWITCH
# int managementethernet 1/1
# ip address 172.10.10.2/24
# no shut
# management route 0.0.0.0/0 172.10.10.10

Then set admin / enable passwords and allow remote management via SSH:

# enable password 123456
# username admin password 123456
# ip ssh server enable

Configure time zone and NTP:

# clock timezone UTC 11
# ntp server 172.10.10.20
# show ntp associations
# show ntp status
# show clock

Firmware upgrade

Force10 switches have two boot banks A: and B:. It’s a good practice to upload new firmware into one boot bank and keep the old firmware in the other in case you need to roll back.

The easiest way to upgrade is via TFTP using Tftpd64, which you can download for free from here. If you’re upgrading an existing switch, make sure to save the running config and make a backup. If it’s an initial install you can skip this step.

# copy run start
# copy start tftp://10.0.0.1/FORCE10_SWITCH_01.01.16.conf

Then upload new firmware to image B:, change active boot bank to B: and reload:

# show version
# show boot system stack-unit 1
# upgrade system tftp://10.0.0.1/FTOS-SK-9.9.0.0P9.bin b:
# conf t
# boot system stack-unit 1 primary system b:
# exit
# reload

You will be prompted to save the configuration and reboot. After the reboot you may be asked to enable SupportAssist. SupportAssist helps to automatically open Dell service tickets if there is a switch fault. You can enable SupportAssist by running the following commands and answering the prompts:

[Image: SupportAssist prompt]

# conf t
# support-assist activate
# support-assist activity full-transfer start now
# show support-assist status

My pair of switches was configured in a Virtual Link Trunking (VLT) domain. I'll explain how VLT works later in the series, but from the upgrade point of view each switch in a VLT domain is treated as a separate switch and has to be upgraded separately. If you decide to use a stack instead of VLT, you can find the upgrade process for a Force10 stack in my other post about Dell MXL switches here.

Spanning tree

Spanning Tree Protocol (STP) helps to prevent network topology loops and is highly recommended for use in any network. Switches connected in an actual loop topology are rare in today's networks, but STP can save you from the consequences of a potential human error, such as a port channel misconfiguration. If, instead of creating one port channel with two links, you mistakenly create two port channels with one link each and both carry the same VLANs, you've accidentally created a loop, which will bring your whole network to an immediate halt.

It's a good practice to enable STP as a safeguard against such configuration errors. The S4048-ON supports STP, RSTP, MSTP and PVST+. In my case the S4048s were uplinked into an HP core, which supported STP, RSTP and MSTP. If you have Cisco switches in your network core you can use PVST+. I used RSTP, which is a good choice if you don't require the enhancements of MSTP or PVST+ in your network. Just make sure not to use basic STP, as it provides the slowest convergence.

# protocol spanning-tree rstp
# no disable
# show spanning-tree rstp

In every STP topology there is also a root switch, which by default is elected automatically. For more deterministic STP behaviour it's recommended to select the root switch manually by assigning the lowest STP priority to it. Typically your core switch should be the root switch. In my case it was an HP core switch, which was assigned a priority of 0.
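
I didn't have to touch the priority on the S4048s, since the HP core was the root. But if your Force10 were the core, a rough sketch of lowering its priority under RSTP would look something like this (the lowest priority wins the root election):

# protocol spanning-tree rstp
# bridge-priority 0
# show spanning-tree rstp brief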

When configuring server and storage facing ports make sure to enable EdgePort mode to minimize the time it takes for the port to come online:

# int range Te1/45-1/48
# spanning-tree rstp edge-port
# switchport
# no shut

If you want to know more about how STP works, you can read a few of my previous blog posts on STP here and here.

Flow control

To avoid dropped packets on 10Gb switch ports during periods of heavy utilization, it is also a best practice to enable, as a minimum, bi-directional Flow Control on the storage array ports. I enabled it on the iSCSI links connected from the Dell Compellent storage array:

# int range Te1/17-1/18
# flowcontrol rx on tx on

If you are specifically interested in switch best practices for Compellent and EqualLogic storage arrays, Dell has a full list of guides for various switches on its communities wiki here.

Port channels and VLANs

Port channels and VLANs are configured similarly to any other switch, but I include them here in case you want to know the syntax. In this example we have two access ports 1/46 and 1/47 and an uplink to the core configured as port channel 1:

# interface port-channel 1
# switchport
# no shutdown

# interface range Te1/1-1/2
# port-channel-protocol LACP
# port-channel 1 mode active
# no shutdown

# int vlan 254
# untagged Te1/46-1/47
# tagged po 1

Keep in mind that port channels are used either in single-switch configurations or when two or more switches are stacked together. If you're using Virtual Link Trunking (VLT), you will need to create Virtual Link Trunks (VLTs) instead, which are similar to port channels but have a slightly different syntax. We will talk about VLT in much more detail in the following Force10 blogs.

Conclusion

One feature which I didn't specifically mention in this blog post is Jumbo Frames. I tend not to use them in my deployments until I see convincing evidence of them making a difference for iSCSI/NFS storage implementations. I wrote a post about Jumbo Frames a long time ago here and haven't changed my opinion since. I'd be interested to hear your thoughts if you have a different take on that.


Dell Compellent is not an ALUA Storage Array

May 16, 2016

Dell Compellent is Dell's flagship storage array, which competes in the market with such rivals as EMC VNX and NetApp FAS. All these products have slightly different storage architectures. In this blog post I want to discuss what distinguishes Dell Compellent from the aforementioned arrays when it comes to multipathing and failover. This may help you make the right decisions when designing and installing a solution based on Dell Compellent in your production environment.

Compellent Array Architecture

In one of my previous posts I showed how Compellent LUNs on vSphere ESXi hosts are claimed by VMW_SATP_DEFAULT_AA instead of VMW_SATP_ALUA SATP, which is the default for all ALUA arrays. This happens because Compellent is not actually an ALUA array and doesn’t have the tpgs_on option enabled. Let’s digress for a minute and talk about what the tpgs_on option actually is.

For a storage array to be claimed by VMW_SATP_ALUA it has to have the tpgs_on option enabled, as indicated by the corresponding SATP claim rule:

# esxcli storage nmp satp rule list

Name                 Transport  Claim Options Description
-------------------  ---------  ------------- -----------------------------------
VMW_SATP_ALUA                   tpgs_on       Any array with ALUA support

This is how Target Port Groups (TPG) are defined in section 5.8.2.1 "Introduction to asymmetric logical unit access" of the SCSI Primary Commands – 3 (SPC-3) standard:

A target port group is defined as a set of target ports that are in the same target port asymmetric access state at all times. A target port group asymmetric access state is defined as the target port asymmetric access state common to the set of target ports in a target port group. The grouping of target ports is vendor specific.

This has to do with how ports on storage controllers are grouped. On an ALUA array even though a LUN can be accessed through either of the controllers, paths only to one of them (controller which owns the LUN) are Active Optimized (AO) and paths to the other controller (non-owner) are Active Non-Optimized (ANO).

Compellent does not present LUNs through the non-owning controller. You can easily see that if you go to the LUN properties. In this example we have four iSCSI ports connected (two per controller) on the Compellent side, but we can see only two paths, which are the paths from the owning controller.

[Image: Compellent LUN paths in vSphere]
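
You can check the same thing from the ESXi shell. A quick sketch, where the naa ID is just a placeholder for one of your Compellent volumes:

# esxcli storage nmp path list -d naa.6000d31000b48b000000000000000001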

If Compellent presents each particular LUN only through one controller, then how does it implement failover? Compellent uses a concept of fault domains and control ports to handle LUN failover between controllers.

Compellent Fault Domains

This is Dell’s definition of a Fault Domain:

Fault domains group front-end ports that are connected to the same Fibre Channel fabric or Ethernet network. Ports that belong to the same fault domain can fail over to each other because they have the same connectivity.

So depending on how you decide to go about your iSCSI network configuration, you can have one iSCSI subnet / one fault domain / one control port, or two iSCSI subnets / two fault domains / two control ports. Either design works fine; it really is just a matter of preference.

You can think of a Control Port as a Virtual IP (VIP) for a particular iSCSI subnet. When you're setting up iSCSI connectivity to a Compellent, you specify the Control Port IPs in the Dynamic Discovery section of the iSCSI adapter properties, and the Control Port then redirects the traffic to the actual controller IPs.
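
As an illustration, adding the Control Port to dynamic discovery from the ESXi CLI would look roughly like this (the vmhba number and IP here are placeholders for your environment):

# esxcli iscsi adapter discovery sendtarget add -A vmhba35 -a 10.10.10.50:3260
# esxcli iscsi adapter discovery rediscover -A vmhba35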

If you go to the Storage Center GUI you will see that Compellent also creates one virtual port for every iSCSI physical port. This is what’s called a Virtual Port Mode and is recommended instead of a Physical Port Mode, which is the default setting during the array initialization.

Failover scenarios

Now that we know what fault domains are, let's talk about the different failover scenarios. Failover can happen either on a port level, when you have a transceiver / cable failure, or on a controller level, when the whole controller goes down or is rebooted. Let's discuss all of these scenarios and their variations one by one.

1. One Port Failed / One Fault Domain

If you use one iSCSI subnet and hence one fault domain, when you have a port failure, Compellent will move the failed port to the other port on the same controller within the same fault domain.

[Image: one port failed, one fault domain]

In this example, 5000D31000B48B0E and 5000D31000B48B0D are physical ports, and 5000D31000B48B1D and 5000D31000B48B1C are the corresponding virtual ports on the first controller. Physical port 5000D31000B48B0E fails. Since both ports on the controller are in the same fault domain, the controller moves virtual port 5000D31000B48B1D from its original physical port 5000D31000B48B0E to physical port 5000D31000B48B0D, which still has a connection to the network. In the background Compellent uses an iSCSI redirect command on the Control Port to move the traffic to the new virtual port location.

2. One Port Failed / Two Fault Domains

The two fault domains scenario is slightly different, as now each controller has only one port in each fault domain. If any of the ports were to fail, the controller would not fail over the port: a port is failed over only within the same controller/domain, and since there is no second port in the same fault domain, the virtual port stays down.

[Image: one port failed, two fault domains]

A distinction needs to be made between the physical and virtual ports here, because from the physical perspective you lose one physical link in both the One Fault Domain and Two Fault Domains scenarios. The only difference is that, since in the latter case the virtual port is not moved, you'll see one path down when you go to the LUN properties on an ESXi host.

3. Two Ports Failed

This is the scenario which you have to be careful with. Compellent does not initiate a controller failover when all front-end ports on a controller fail. The end result – all LUNs owned by this controller become unavailable.

[Image: two ports failed on one controller]

[Image: LUNs unavailable]

This is the price Compellent pays for not supporting ALUA. However, such a scenario is very unlikely to happen in a properly designed solution. If you have two redundant network switches and the controllers are cross-connected to both of them, then when one switch fails you lose only one link per controller and all LUNs stay accessible through the remaining links and switch.

4. Controller Failed / Rebooted

If the whole controller fails the ports are failed over in a similar fashion. But now, instead of moving ports within the controller, ports are moved across controllers and LUNs come across with them. You can see how all virtual ports have been failed over from the second (failed) to the first (survived) controller:

[Image: virtual ports failed over to the surviving controller]

Once the second controller gets back online, you will need to rebalance the ports or in other words move them back to the original controller. This doesn’t happen automatically. Compellent will either show you a pop up window or you can do that by going to System > Setup > Multi-Controller > Rebalance Local Ports.

Conclusion

Dell Compellent is not an ALUA storage array and falls into the category of Active/Passive arrays from the LUN access perspective. Under such an architecture both controllers can service I/O, but each particular LUN can be accessed only through one controller. This is different from ALUA arrays, where a LUN can be accessed from both controllers, but paths are active optimized on the owning controller and active non-optimized on the non-owning controller.

From the end user perspective it does not make much of a difference. As we've seen, Compellent can handle failover on both the port and controller levels. The only exception is that Compellent doesn't fail over a controller if it loses all front-end connectivity, but this issue can be easily avoided by properly designing the iSCSI network and making sure that both controllers are connected to two upstream switches in a redundant fashion.

Changing the Default PSP for Dell Compellent

April 26, 2016

If you've ever worked with Dell Compellent storage arrays you may have noticed that when you initially connect one to a VMware ESXi host, by default the VMware Native Multipathing Plugin (NMP) uses the Fixed Path Selection Policy (PSP) for all connected LUNs. And if you have two ports on each of the controllers connected to your storage area network (be it iSCSI or FC), then you're wasting half of your bandwidth.

[Image: Fixed PSP in LUN properties]

Why does that happen? Let’s dig deep into VMware’s Pluggable Storage Architecture (PSA) and see how it treats Compellent.

How Compellent is claimed by VMware NMP

If you are familiar with vSphere’s Pluggable Storage Architecture (PSA) and NMP (which is the only PSA plug-in that every ESXi host has installed by default), then you may know that historically it’s always had specific rules for such Asymmetric Logical Unit Access (ALUA) arrays as NetApp FAS and EMC VNX.

Run the following command on an ESXi host and you will see claim rules for NetApp and DGC devices (DGC is Data General Corporation, which built the CLARiiON array that was later re-branded as VNX by EMC):

# esxcli storage nmp satp rule list

Name              Vendor  Default PSP Description
----------------  ------- ----------- -------------------------------
VMW_SATP_ALUA_CX  DGC                 CLARiiON array in ALUA mode
VMW_SATP_ALUA     NETAPP  VMW_PSP_RR  NetApp arrays with ALUA support

This tells NMP to use Round-Robin Path Selection Policy (PSP) for these arrays, which is always preferable if you want to utilize all available active-optimized paths. You may have noticed that there’s no default PSP in the VNX claim rule, but if you look at the default PSP for the VMW_SATP_ALUA_CX Storage Array Type Plug-In (SATP), you’ll see that it’s also Round-Robin:

# esxcli storage nmp satp list

Name              Default PSP  
----------------- -----------
VMW_SATP_ALUA_CX  VMW_PSP_RR

There is, however, no default claim rule for Dell Compellent storage arrays. There is only a handful of the following non-array-specific "catch all" rules:

Name                 Transport  Claim Options Description
-------------------  ---------  ------------- -----------------------------------
VMW_SATP_ALUA                   tpgs_on       Any array with ALUA support
VMW_SATP_DEFAULT_AA  fc                       Fibre Channel Devices
VMW_SATP_DEFAULT_AA  fcoe                     Fibre Channel over Ethernet Devices
VMW_SATP_DEFAULT_AA  iscsi                    iSCSI Devices

As you can see, the default PSP for VMW_SATP_ALUA is Most Recently Used (MRU) and for VMW_SATP_DEFAULT_AA it’s VMW_PSP_FIXED:

Name                Default PSP   Description
------------------- ------------- ------------------------------------------
VMW_SATP_ALUA       VMW_PSP_MRU
VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays

Compellent is not an ALUA storage array and doesn’t have the tpgs_on option enabled. As a result it’s claimed by the VMW_SATP_DEFAULT_AA rule for the iSCSI transport, which is why you end up with the Fixed path selection policy for all LUNs by default.

Changing the default PSP

Now let's see how we can change the PSP from Fixed to Round Robin. The first thing you have to do before even attempting to change the PSP is to check the VMware Compatibility List to make sure that the Round Robin PSP is supported for your particular array and vSphere combination.

[Image: VMware Compatibility Guide entry]

As you can see, round robin path selection policy is supported for Dell Compellent storage arrays in vSphere 6.0u2. So let’s change it to get the benefit of being able to simultaneously use all paths to Compellent controllers.

For Compellent firmware versions 6.5 and earlier use the following command to change the default PSP:

# esxcli storage nmp satp set -P VMW_PSP_RR -s VMW_SATP_DEFAULT_AA

Note: technically here you’re changing PSP not specifically for the Compellent storage array, but for any array which is claimed by VMW_SATP_DEFAULT_AA and which also doesn’t have an individual SATP rule with PSP set. Make sure that this is not the case or you may accidentally change PSP for some other array you may have in your environment.
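
If you'd rather scope the change to Compellent devices only, an alternative is to add a vendor-specific claim rule instead of changing the SATP default. A sketch of what that could look like (COMPELNT is the vendor string these arrays report, the description text is arbitrary):

# esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR -V COMPELNT -e "Compellent arrays with Round Robin"
# esxcli storage nmp satp rule list | grep COMPELNT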

The above will change PSP for any newly provisioned and connected LUNs. For any existing LUNs you can change PSP either manually in each LUN’s properties or run the following command in PowerCLI:

# Get-Cluster ClusterNameHere | Get-VMHost | Get-ScsiLun | where {$_.Vendor -eq "COMPELNT" -and $_.MultipathPolicy -eq "Fixed"} | Set-ScsiLun -MultipathPolicy RoundRobin

This is what you should see in LUN properties as a result:

[Image: Round Robin PSP in LUN properties]
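
You can also verify it from the ESXi shell; something along these lines:

# esxcli storage nmp device list | grep -i "Path Selection Policy:"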

Conclusion

By default any LUN connected from a Dell Compellent storage array is claimed by NMP using Fixed path selection policy. You can change it to Round Robin using the above two simple commands to make sure you utilize all storage paths available to ESXi hosts.

Dell Compellent Enterprise Manager: SQL Server Setup

January 25, 2016

Dell Compellent Enterprise Manager is a separate piece of software, which comes with every Compellent storage array deployment and allows you to monitor, manage, and analyze one or multiple arrays from a centralized management console.

Probably the most valuable feature of Enterprise Manager for an average user is its historical performance statistics. From the Storage Center GUI you can see only real-time data, whereas Enterprise Manager is capable of keeping statistics for up to a year. It can obviously do a multitude of other things as well, such as assist with capacity planning, allow you to configure replication between production and DR sites, generate reports, or simply serve as a single pane of glass interface to all of your Compellent storage arrays.

The Enterprise Manager installation does not include an embedded database. Whether you want to deploy the EM database on an existing Microsoft SQL Server instance or install a new dedicated Microsoft SQL Server Express instance, you need to do it manually. The following guide describes how to install Enterprise Manager with Microsoft SQL Server 2012 Express.

SQL Server Authentication

During installation, Enterprise Manager will ask you for authentication credentials to connect to the SQL database. In SQL Server you can use either the Windows Authentication Mode or Mixed Mode. Mixed Mode allows both Windows authentication and SQL Server authentication via local SQL Server accounts.

For a typical SQL Server setup, Microsoft does not recommend enabling SQL Server authentication, especially with the default "sa" account, as it's seen as insecure. So if you have a centralized Microsoft SQL Server in your network, you'll most likely be using Windows Authentication. But for a dedicated SQL Server Express database it's fine to enable Mixed Mode and use SQL Server authentication.

Make sure to enable Mixed Mode during SQL Server Express installation and enter a password for the “sa” account.

[Image: SQL Server authentication mode settings]

SQL Server Network Configuration

Enterprise Manager uses TCP/IP to connect to the SQL database over port 1433. TCP/IP is not enabled by default in SQL Server Express, so you will need to enable it manually.

[Image: SQL Server network configuration]

Use SQL Server Configuration Manager, which is installed with the database, and browse to SQL Server Network Configuration > Protocols for SQLEXPRESS > TCP/IP. Enable TCP/IP on the Protocol tab and assign port 1433 to the TCP Port field in the IPAll section of the IP Addresses tab.

Make sure to restart the SQL Server service to apply the settings, and you should now be able to connect Enterprise Manager to the database.
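
If Enterprise Manager still can't connect, a quick sanity check from the EM server is to test the port with PowerShell (Server 2012 R2 or later; the hostname below is just a placeholder):

# Test-NetConnection -ComputerName sql-server-01 -Port 1433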

If you get stuck, refer to Dell Compellent Enterprise Manager Installation Guide and specifically the section “Prepare a Microsoft SQL Server Database”. This guide is available in Dell Compellent Knowledge Center.

Beginner’s Guide to Dell N4000 Series Switches

January 18, 2016

Dell N-Series switches run on Dell Network Operating System (DNOS) version 6.x. Unlike Dell S-Series switches, which run on DNOS 9.x derived from the Force10 Operating System (FTOS), DNOS 6.x came from the PowerConnect switch series and shares the same codebase. So if you've ever worked with PowerConnect switches, N-Series syntax should be very familiar.

In my case I had two Dell N4032F switches. But the same set of commands applies to any other N4000 Series switch.

Initial Configuration

When you first turn the switch on, it gives you 60 seconds to enter the wizard, where you can set up network settings for the Out-of-Band (OOB) management interface and change the admin password. If you miss it you can reboot the switch and it will show the same wizard prompt again when it boots up. Or you can set it up from the CLI:

# interface out-of-band
# ip address 10.10.10.10 255.255.255.0 10.10.10.254

# show ip interface out-of-band

Once you get to the CLI prompt, configure hostname and enable SSH:

# hostname n4032f-prod

# crypto key generate rsa
# crypto key generate dsa
# ip ssh server
# ip telnet server disable

Stacking

Dell N4000 Series switches support both stacking and MLAG (Multi-chassis Link Aggregation). One of the drawbacks of a stack configuration is disruptive firmware upgrades: when you update firmware on the stack master, the firmware is distributed to all stack members and all switches are rebooted simultaneously.

In MLAG each switch has its own control plane and can be rebooted independently. At the same time this is MLAG's shortcoming, because unlike a stack, where all units act as one switch, with MLAG you have to manage each switch separately.

In my case I chose stacking for its simplicity.

[Image: Dell N4000 switch]

N4000 switches are stacked using the two 40Gb QSFP ports located at the front. QSFP ports are not configured in stack mode by default, which you need to change on both switches before you can build a stack:

# stack
# stack-port Fortygigabitethernet 1/1/1 stack
# stack-port Fortygigabitethernet 1/1/2 stack

# show switch stack-ports

Once QSFP ports on both switches are configured, disconnect power from both switches and boot the switch you want to be the stack master first (typically the top switch). When the first switch has fully booted, boot the second switch and check the status. This is what you should see:

# show switch

[Image: show switch output for the stack]

Firmware Upgrade

If it’s not a brand new switch, save the config before doing the firmware upgrade:

# copy run start
# copy running-config tftp://10.10.10.100/backup.txt

You can use any TFTP server for the firmware upgrade, such as the free Tftpd64 server.

[Image: Tftpd64]

Then you upload the firmware image to the stack master and reload the stack:

# copy tftp://10.10.10.100/N4000v6.2.7.2.stk backup
# boot system backup
# reload
# show version

Firmware is uploaded to a backup image. Then you select the backup image for the next boot and reload the stack. When both switches reboot you should see something similar to this:

[Image: firmware upgraded on both stack members]

As part of the upgrade process the new firmware is automatically uploaded from the master to all stack members, which is the default behaviour. You can confirm it is enabled using the following command:

# show auto-copy-sw

Flow Control, Jumbo Frames and iSCSI Optimization

In my case I used two N4032F switches for an iSCSI backbone, so I needed to make sure that Flow Control and Jumbo Frames are enabled on the switch.

Flow Control is enabled by default, which you can confirm by the following command:

# show storm-control

To globally enable Jumbo Frames on all ports type:

# system jumbo mtu 9216

# show system mtu

Interestingly, Dell N4000 Series switches also have built-in iSCSI optimization, which can detect iSCSI sessions by snooping the traffic on ports 3260 and 860. It then prioritizes iSCSI traffic over the other types of traffic to guarantee low latency for storage I/O. To show iSCSI settings:

# show iscsi

By default the switches only track the sessions. Traffic prioritization is disabled by default and has to be enabled manually. This didn't matter in my case, as the switches were dedicated to storage traffic. But if you share switches between storage and server traffic, you may want to enable it. Refer to the switch User's Configuration Guide for details.
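
If I recall the DNOS 6.x syntax correctly, enabling prioritization is along these lines, but do verify it against the CLI Reference Guide for your firmware version:

# iscsi cos enable
# show iscsi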

If you're using a Dell Compellent storage array with N4000 switches, also make sure to apply the Compellent profile to the ports the storage array is connected to:

# macro global apply profile-compellent-nas $interface_name te1/0/1
# macro global apply profile-compellent-nas $interface_name te1/0/2
# macro global apply profile-compellent-nas $interface_name te1/0/3
# macro global apply profile-compellent-nas $interface_name te1/0/4

VLANs, Trunks and Port Channels

Again, I didn’t use any VLANs and Trunks, because switches were dedicated for iSCSI traffic and were separate from the LAN core. And I didn’t need Port Channels either, as they are not required for iSCSI.

Your scenario might be different. For instance, if you have vSphere hosts connected to a NetApp array over NFS, you may want to create a Multi-Mode (LACP) VIF on the NetApp side. If that’s the case, to create a port channel on the Multi-Mode VIF ports use the following:

# interface range te1/0/2,te2/0/2
# channel-group 1 mode active
# show interfaces po1

If the switches are used for both storage and VM traffic, then you’ll need to configure the server ports and uplink them to your network core. Create your VLANs first:

# vlan 10,20,30

Configure vSwitch uplinks from the ESXi hosts. In a typical vSphere environment, traffic is tagged on the vSwitch side, which means that server ports should be configured as trunks:

# interface range te1/0/3-6,te2/0/3-6
# switchport mode trunk
# switchport trunk allowed vlan 10,20,30

And finally configure uplinks to the network core. Depending on how your LAN core is set up, you may want to create a port channel to the upstream switch and trunk the required VLANs:

# interface range te1/0/1,te2/0/1
# channel-group 2 mode active
# switchport mode trunk
# switchport trunk allowed vlan 10,20,30
# show interfaces po2

Conclusion

This guide didn't include information on Spanning Tree, QoS or any of the switch Layer 3 features, but I hope it can get you started. At the end of the day, every environment is different. If you need additional information, refer to the switch configuration guides on the Dell web-site.

Dell Compellent iSCSI Configuration

November 20, 2015

I haven't seen too many blog posts on how to configure Compellent for iSCSI, and there seems to be some confusion about what the best practices for iSCSI are. I hope I can shed some light on it by sharing my experience.

In this post I want to talk specifically about the Windows scenario, such as when you want to use it for Hyper-V. I used Windows Server 2012 R2, but the process is similar for other Windows Server versions.

Design Considerations

All iSCSI design considerations revolve around networking configuration. The two questions you need to ask yourself are what your switch topology is going to look like and how you are going to configure your subnets. It all typically boils down to the two most common scenarios: two stacked switches and one subnet, or two standalone switches and two subnets. I could not find a specific recommendation from Dell on whether it should be one or two subnets, so I assume that both scenarios are supported.

It is worth mentioning that Compellent uses a concept of Fault Domains to group front-end ports that are connected to the same Ethernet network. This means that you will have one fault domain in the one-subnet scenario and two fault domains in the two-subnet scenario.

For iSCSI target port discovery from the hosts, you need to configure a Control Port on the Compellent. The Control Port has its own IP address, and one Control Port is configured per Fault Domain. When a server targets the Control Port IP address, it automatically discovers all ports in the fault domain. In other words, instead of using the IPs configured on the Compellent iSCSI ports, you'll need to use the Control Port IP for iSCSI target discovery.

Compellent iSCSI Configuration

In my case I had two stacked switches, so I chose to use one iSCSI subnet. This translates into one Fault Domain and one Control Port on the Compellent.

IP settings for iSCSI ports can be configured at Storage Management > System > Setup > Configure iSCSI IO Cards.

[Image: Configure iSCSI IO Cards]

To create and assign Fault Domains go to Storage Management > System > Setup > Configure Local Ports > Edit Fault Domains. From there select your fault domain and click Edit Fault Domain. On IP Settings tab you will find iSCSI Control Port IP address settings.

[Image: Edit Fault Domains]

[Image: Control Port IP settings]

Host MPIO Configuration

On the Windows Server side, start by installing the Multipath I/O feature. Then go to the MPIO Control Panel and add support for iSCSI devices. After a reboot you will see MSFT2005iSCSIBusType_0x9 in the list of supported devices. This step is important: if you don't do it, then when you map a Compellent disk to the hosts, instead of one disk you will see multiple copies of the same disk device in Device Manager (one per path).

[Image: adding support for iSCSI devices in MPIO]

[Image: MSFT2005iSCSIBusType_0x9 in the list of supported devices]
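
If you prefer PowerShell over the GUI, the equivalent steps are roughly as follows (Windows Server 2012 R2 cmdlets):

# Install-WindowsFeature -Name Multipath-IO
# Enable-MSDSMAutomaticClaim -BusType iSCSI
# Get-MSDSMSupportedHW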

Host iSCSI Configuration

To connect hosts to the storage array, open iSCSI Initiator Properties and add your Control Port to iSCSI targets. On the list of discovered targets you should see four Compellent iSCSI ports.

The next step is to connect the initiators to the targets. This is where it is easy to make a mistake. In my scenario I have one iSCSI subnet, which means that each of the two host NICs can talk to all four array iSCSI ports. As a result I should have 2 host ports x 4 array ports = 8 paths. To accomplish that, on the Targets tab I have to connect each initiator IP to each target port by clicking the Connect button twice for each target and selecting one initiator IP and then the other.

[Image: iSCSI target discovery]

[Image: discovered targets]

[Image: connecting targets]
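
The same connections can be scripted with the iSCSI PowerShell cmdlets. A rough sketch, where 10.10.10.50 stands in for the Control Port IP and 10.10.10.11 / 10.10.10.12 for the two host NIC IPs:

# New-IscsiTargetPortal -TargetPortalAddress 10.10.10.50
# Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.10.10.11
# Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.10.10.12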

Compellent Volume Mapping

Once all hosts are logged in to the array, go back to Storage Manager and add the servers to the inventory by clicking Servers > Create Server. You should see the hosts' iSCSI adapters in the list already. Make sure to assign the correct host type. I chose Windows 2012 Hyper-V.

 

[Image: Create Server wizard]

It is also a best practice to create a Server Cluster container and add all hosts into it if you are deploying a Hyper-V or a vSphere cluster. This guarantees consistent LUN IDs across all hosts when LUN is mapped to a Server Cluster object.

From here you can create your volumes and map them to the Server Cluster.

Check iSCSI Paths

To make sure that multipathing is configured correctly, use “mpclaim” to show I/O paths. As you can see, even though we have 8 paths to the storage array, we can see only 4 paths to each LUN.
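
The commands I'm referring to are along the lines of (the disk number is just an example):

# mpclaim -s -d
# mpclaim -s -d 1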

[Image: mpclaim I/O paths output]

Arrays such as EMC VNX and NetApp FAS use Asymmetric Logical Unit Access (ALUA), where a LUN is owned by only one controller but presented through both. Paths to the owning controller are marked as Active/Optimized and paths to the non-owning controller are marked as Active/Non-Optimized and are used only if the owning controller fails.

Compellent is different. Instead of ALUA it uses iSCSI Redirection to move traffic to a surviving controller in a failover situation and does not need to present the LUN through both controllers. This is why you see 4 paths instead of 8, which would be the case if we used an ALUA array.
