Posts Tagged ‘MPIO’

Dell Compellent iSCSI Configuration

November 20, 2015

I haven’t seen too many blog posts on how to configure Compellent for iSCSI. And there seems to be some confusion about what the best practices for iSCSI are. I hope I can shed some light on it by sharing my experience.

In this post I want to talk specifically about the Windows scenario, such as when you want to use it for Hyper-V. I used Windows Server 2012 R2, but the process is similar for other Windows Server versions.

Design Considerations

All iSCSI design considerations revolve around networking configuration. The two questions you need to ask yourself are what your switch topology is going to look like and how you are going to configure your subnets. It typically boils down to the two most common scenarios: two stacked switches with one subnet, or two standalone switches with two subnets. I could not find a specific recommendation from Dell on whether it should be one or two subnets, so I assume that both scenarios are supported.

It is worth mentioning that Compellent uses the concept of Fault Domains to group front-end ports that are connected to the same Ethernet network. This means that you will have one fault domain in the one-subnet scenario and two fault domains in the two-subnet scenario.

For iSCSI target port discovery from the hosts, you need to configure a Control Port on the Compellent. A Control Port has its own IP address, and one Control Port is configured per Fault Domain. When a server targets the Control Port IP address, it automatically discovers all ports in that fault domain. In other words, instead of using the IPs configured on the Compellent iSCSI ports, you use the Control Port IP for iSCSI target discovery.

Compellent iSCSI Configuration

In my case I had two stacked switches, so I chose to use one iSCSI subnet. This translates into one Fault Domain and one Control Port on the Compellent.

IP settings for iSCSI ports can be configured at Storage Management > System > Setup > Configure iSCSI IO Cards.

iscsi_ports

To create and assign Fault Domains go to Storage Management > System > Setup > Configure Local Ports > Edit Fault Domains. From there select your fault domain and click Edit Fault Domain. On IP Settings tab you will find iSCSI Control Port IP address settings.

local_ports

control_port

Host MPIO Configuration

On the Windows Server side, start by installing the Multipath I/O feature. Then go to the MPIO Control Panel and add support for iSCSI devices. After a reboot you will see MSFT2005iSCSIBusType_0x9 in the list of supported devices. This step is important: if you skip it, then when you map a Compellent disk to the hosts, instead of one disk you will see multiple copies of the same disk device in Device Manager (one per path).

add_iscsi

iscsi_added
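If you prefer to script this step, a rough PowerShell equivalent on Server 2012 R2 would look like this (a sketch, not a verbatim copy of what I did):

# Install the Multipath I/O feature (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI-attached devices for MPIO, same as ticking
# "Add support for iSCSI devices" in the MPIO control panel
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Verify that MSFT2005iSCSIBusType_0x9 is now on the list
Get-MSDSMSupportedHW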

Host iSCSI Configuration

To connect hosts to the storage array, open iSCSI Initiator Properties and add the Control Port IP address to the list of discovery portals on the Discovery tab. On the list of discovered targets you should see four Compellent iSCSI ports.

The next step is to connect the initiators to the targets. This is where it is easy to make a mistake. In my scenario I have one iSCSI subnet, which means that each of the two host NICs can talk to all four array iSCSI ports. As a result I should have 2 host ports x 4 array ports = 8 paths. To accomplish that, on the Targets tab I have to connect each initiator IP to each target port by clicking the Connect button twice for each target, selecting one initiator IP the first time and the other the second time.

iscsi_targets

discovered_targets

connect_targets
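For reference, a rough PowerShell equivalent of the discovery and login steps above would be something like this (the Control Port and initiator IP addresses are placeholders, so substitute your own):

# Register the Compellent Control Port as the discovery portal
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.100

# Log in to every discovered target once per host NIC,
# so that 2 host ports x 4 array ports = 8 sessions are established
$initiators = "10.10.10.11", "10.10.10.12"
foreach ($ip in $initiators) {
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true `
        -IsMultipathEnabled $true -InitiatorPortalAddress $ip
}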

Compellent Volume Mapping

Once all hosts are logged in to the array, go back to Storage Manager and add the servers to the inventory by clicking Servers > Create Server. You should see the hosts’ iSCSI adapters in the list already. Make sure to assign the correct host type. I chose Windows 2012 Hyper-V.


add_servers

It is also a best practice to create a Server Cluster container and add all hosts to it if you are deploying a Hyper-V or vSphere cluster. This guarantees consistent LUN IDs across all hosts when a LUN is mapped to the Server Cluster object.

From here you can create your volumes and map them to the Server Cluster.

Check iSCSI Paths

To make sure that multipathing is configured correctly, use “mpclaim” to show the I/O paths. As you can see, even though we have 8 iSCSI sessions to the storage array, there are only 4 paths to each LUN.

io_paths
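In case you haven’t used it before, the mpclaim switches relevant here are:

mpclaim -s -d      (list all MPIO disks and their load-balance policies)
mpclaim -s -d 1    (show the individual paths and path states for MPIO disk 1)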

Arrays such as EMC VNX and NetApp FAS use Asymmetric Logical Unit Access (ALUA), where a LUN is owned by only one controller but presented through both. Paths to the owning controller are marked as Active/Optimized, and paths to the non-owning controller are marked as Active/Non-Optimized and are used only if the owning controller fails.

Compellent is different. Instead of ALUA it uses iSCSI Redirection to move traffic to the surviving controller in a failover situation, and so it does not need to present the LUN through both controllers. This is why you see 4 paths instead of the 8 you would see with an ALUA array.



Connecting VMware ESXi Hosts to NetApp: MPIO Configuration

May 23, 2013

fas3140a

Overview

NetApp filers are active/active ALUA arrays. This means that you can access LUNs configured on one controller via the second one. But access to the partner’s LUNs is provided through the internal interconnect and is always slower. That’s why the paths that go through the partner controller are called “unoptimized”. Their primary purpose is to provide backup paths in case of a failover.

Fixed path selection

VMware hosts by default use the “VMW_SATP_DEFAULT_AA” Storage Array Type Plug-in (SATP) and the “Fixed” Path Selection Policy (PSP) for active/active arrays. If an ESXi host is configured with these SATP and PSP, it will access each LUN via one particular path, even if you have two FC ports on each of the controllers.

A VMware host can’t automatically identify the optimized path. So you can either set it manually or use the functionality of the NetApp Virtual Storage Console (VSC) plug-in for VMware. Just go to the Monitoring and Host Configuration -> Overview section of VSC, right-click the ESXi host and click “Set Recommended Values”. If you don’t do that, ESXi hosts will run I/O through an arbitrarily chosen path, which could turn out to be unoptimized. That means you will push heaps of I/O through the partner node and experience higher latencies.
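If you prefer to set it by hand rather than through VSC, the esxcli commands look roughly like this (the device and path identifiers are placeholders, so substitute your own):

# show the current SATP/PSP and paths for each device
esxcli storage nmp device list

# set the preferred (optimized) path for a device claimed by the Fixed PSP
esxcli storage nmp psp fixed deviceconfig set --device naa.60a98000xxxxxxxx --path vmhba2:C0:T1:L0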

You can check whether you’re using non-optimized paths by looking for warnings like these on the NetApps:

filer_01> Mon May 6 10:30:45 EST [filer_01: ems.engine.inputSuppress:error]: Event ‘scsitarget.partnerPath.misconfigured’ suppressed 327 times since Mon May 6 09:30:48 EST 2013.
Mon May 6 10:30:45 EST [filer_01: scsitarget.partnerPath.misconfigured:error]: FCP Partner Path Misconfigured – Host I/O access through a non-primary and non-optimal path was detected.

Or run “lun stats -o” and look for huge numbers under “Partner Ops” and “Partner KBytes”.

ALUA configuration

If you want to utilize both links to the controller in a round-robin fashion, you need to do some additional configuration. Enable ALUA for your VMware ESXi hosts’ initiator group on the NetApp:

igroup set <group> alua yes
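To double-check that the flag took effect (7-Mode syntax, the group name is a placeholder):

igroup show -v esxi_hosts    (the ALUA setting for the group should now show as enabled)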

Now you need to reboot the ESXi host. After a reboot it will see that the storage is ALUA-capable and will change the SATP to VMW_SATP_ALUA and the PSP to “Most Recently Used”. To load-balance between the two controller paths you need to change the PSP to “Round Robin”. Again, you can do that either manually or via VSC.
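Manually, that boils down to something like the following (again, the device ID is a placeholder):

# switch an individual device to Round Robin
esxcli storage nmp device set --device naa.60a98000xxxxxxxx --psp VMW_PSP_RR

# or make Round Robin the default PSP for everything claimed by the ALUA SATP
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR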

Note: Don’t ever use ALUA and VMW_SATP_ALUA if you have a Windows Server 2003 MSCS or Windows Server 2008 Failover Cluster with shared RDM LUNs. It’s an unsupported configuration and you can run into a cluster failure situation. It’s described in many places.

In this case leave the SATP as “VMW_SATP_DEFAULT_AA”, the PSP as “Fixed”, and make sure that you use the optimized paths.

Windows MPIO with IBM storage

September 17, 2012

IBM mid-range storage systems (like the DS3950) work in active/passive mode. This means that access to each LUN is provided through one controller, in contrast to active/active storage, where data between the host and the two controllers can flow in a round-robin fashion. The redundant path here is used only for failover. The software that provides this failover functionality is called Multipath I/O (MPIO) and has implementations for all operating systems. I’ll describe how to configure the MPIO version for Windows.

Installation

Prior to Windows Server 2008, Microsoft didn’t have its own MPIO implementation and MPIO was distributed with the IBM DS Storage Manager product. Now you can install MPIO from the “Features” sub-menu of Windows Server 2008 Server Manager. After the installation is complete you will find MPIO configuration options under Control Panel and in Administrative Tools.
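If you prefer the command line, the same feature can be added with ServerManagerCmd on Windows Server 2008, or with PowerShell on 2008 R2:

ServerManagerCmd.exe -install Multipath-IO

Import-Module ServerManager
Add-WindowsFeature Multipath-IO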

IBM storage works well with the default Windows MPIO implementation; however, it’s recommended to install the IBM MPIO DSM (device-specific module) from the Storage Manager installation bundle. In my case the MPIO installation file was called SMIA-WSX64-01.03.0305.0608.

Enable multipathing

Initially you will see two hard drives for each LUN in Device Manager. You can enable MPIO for a particular hardware ID (in other words, a storage system) on the Discover Multi-Paths tab of the MPIO control panel. You can’t do that with per-LUN granularity. After you add the selected devices and reboot, you will see them on the “MPIO Devices” tab. Now each LUN will be seen as a single hard drive in Device Manager.
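The same claiming can be scripted with mpclaim. The device string is the array’s 8-character vendor ID plus 16-character product ID, space-padded, exactly as shown on the Discover Multi-Paths tab (the value below is a placeholder):

mpclaim -r -i -d "<VendorId ProductId >"    (claim one array by its hardware ID; -r reboots when done, use -n to suppress the reboot)
mpclaim -r -i -a ""                         (or simply claim every MPIO-capable device currently present)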

Configure preferred path

MPIO supports several load-balancing policies, which are configured on a per-LUN basis from the MPIO tab of a hard drive in Device Manager. As the Load Balance Policy select Fail Over Only. Then for each path select which is Active/Optimized and which is a Standby path. Also make the active path Preferred, so that after a failover the LUN fails back to it.

Don’t be confused by iSCSI in the figure. It’s the same for pure FC; the screenshot is just for reference.
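The policy can also be set per disk from the command line; load-balance policy number 1 corresponds to Fail Over Only (disk numbers come from “mpclaim -s -d”):

mpclaim -l -d 1 1    (set MPIO disk 1 to policy 1 = Fail Over Only)
mpclaim -s -d 1      (verify the policy and the resulting path states)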

Check configuration

When you configure the active and passive paths you assume that the first path listed goes to controller A and the second to controller B. But, in fact, there is no indication of that on the configuration page, and you can neither confirm nor deny it. The only IDs you see are adapter port numbers, and they don’t even map to the actual ports on the HBAs.

To be able to check your configuration you need to install the IBM SMdevices utility, which comes with IBM DS Storage Manager. Run the DS Storage Manager installation, go for Custom Installation and select only the Utilities part. In the SMdevices output you can see which path is preferred for each LUN and whether it’s configured as active (In Use):

C:\Program Files\IBM_DS\util>SMdevices
IBM System Storage DS Storage Manager Devices
. . .
\\.\PHYSICALDRIVE1 [Storage Subsystem ITSO5300, Logical Drive 1, LUN 0,
  Logical Drive ID <600a0b80002904de000005a246d82e17>,
  Preferred Path (Controller-A): In Use]

References

The best reference I found on this topic is the IBM Midrange System Storage Hardware Guide (SG24-7676-01), starting from p. 453: “DS5000 logical drive representation in Windows Server 2008”, as well as the Installing and Configuring MPIO guide from Microsoft.