
Changing the Default PSP for Dell Compellent

April 26, 2016

If you’ve ever worked with Dell Compellent storage arrays, you may have noticed that when you initially connect one to a VMware ESXi host, the VMware Native Multipathing Plugin (NMP) by default uses the Fixed Path Selection Policy (PSP) for all connected LUNs. And if you have two ports on each of the controllers connected to your storage area network (be it iSCSI or FC), then you’re wasting half of your bandwidth.

[Screenshot: LUN properties showing the Fixed path selection policy]

Why does that happen? Let’s dig deep into VMware’s Pluggable Storage Architecture (PSA) and see how it treats Compellent.

How Compellent is claimed by VMware NMP

If you are familiar with vSphere’s Pluggable Storage Architecture (PSA) and NMP (the only PSA plug-in that every ESXi host has installed by default), then you may know that historically it has always had specific rules for Asymmetric Logical Unit Access (ALUA) arrays such as NetApp FAS and EMC VNX.

Run the following command on an ESXi host and you will see claim rules for NetApp and DGC devices (DGC is Data General Corporation, which built the CLARiiON array that was later re-branded as VNX by EMC):

# esxcli storage nmp satp rule list

Name              Vendor  Default PSP Description
----------------  ------- ----------- -------------------------------
VMW_SATP_ALUA_CX  DGC                 CLARiiON array in ALUA mode
VMW_SATP_ALUA     NETAPP  VMW_PSP_RR  NetApp arrays with ALUA support

This tells NMP to use Round-Robin Path Selection Policy (PSP) for these arrays, which is always preferable if you want to utilize all available active-optimized paths. You may have noticed that there’s no default PSP in the VNX claim rule, but if you look at the default PSP for the VMW_SATP_ALUA_CX Storage Array Type Plug-In (SATP), you’ll see that it’s also Round-Robin:

# esxcli storage nmp satp list

Name              Default PSP  
----------------- -----------
VMW_SATP_ALUA_CX  VMW_PSP_RR

There is, however, no vendor-specific claim rule for Dell Compellent storage arrays. Instead, there are a handful of non-array-specific “catch-all” rules:

Name                 Transport  Claim Options Description
-------------------  ---------  ------------- -----------------------------------
VMW_SATP_ALUA                   tpgs_on       Any array with ALUA support
VMW_SATP_DEFAULT_AA  fc                       Fibre Channel Devices
VMW_SATP_DEFAULT_AA  fcoe                     Fibre Channel over Ethernet Devices
VMW_SATP_DEFAULT_AA  iscsi                    iSCSI Devices

As you can see, the default PSP for VMW_SATP_ALUA is Most Recently Used (MRU) and for VMW_SATP_DEFAULT_AA it’s VMW_PSP_FIXED:

Name                Default PSP   Description
------------------- ------------- ------------------------------------------
VMW_SATP_ALUA       VMW_PSP_MRU
VMW_SATP_DEFAULT_AA VMW_PSP_FIXED Supports non-specific active/active arrays

Compellent is not an ALUA storage array and doesn’t have the tpgs_on option enabled. As a result it’s claimed by the VMW_SATP_DEFAULT_AA rule for the iSCSI transport, which is why you end up with the Fixed path selection policy for all LUNs by default.
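
You can confirm this on a per-LUN basis with the command below. The naa ID is a placeholder, substitute a device ID taken from “esxcli storage core device list”; the output (trimmed here) shows the SATP that claimed the device and the PSP in use:

# esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
   Storage Array Type: VMW_SATP_DEFAULT_AA
   Path Selection Policy: VMW_PSP_FIXED
   (output trimmed)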

Changing the default PSP

Now let’s see how we can change the PSP from Fixed to Round Robin. The first thing to do, before even attempting to change the PSP, is to check the VMware Compatibility Guide to make sure that the Round Robin PSP is supported for your particular array and vSphere combination.

[Screenshot: VMware Compatibility Guide entry for the Dell Compellent array]

As you can see, the Round Robin path selection policy is supported for Dell Compellent storage arrays in vSphere 6.0 U2. So let’s change it to get the benefit of using all paths to the Compellent controllers simultaneously.

For Compellent firmware versions 6.5 and earlier, use the following command to change the default PSP:

# esxcli storage nmp satp set -P VMW_PSP_RR -s VMW_SATP_DEFAULT_AA

Note: technically you’re changing the PSP not specifically for the Compellent storage array, but for any array that is claimed by VMW_SATP_DEFAULT_AA and doesn’t have an individual SATP rule with a PSP set. Make sure that this is not the case, or you may accidentally change the PSP for some other array in your environment. A more narrowly scoped alternative is sketched below.
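
If you want to limit the change to Compellent devices only, you can add a vendor-specific SATP rule instead of changing the default. This is a sketch assuming the vendor string COMPELNT (the same string the PowerCLI one-liner below matches on); verify it against your own devices first:

# esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR -V COMPELNT -e "Dell Compellent - Round Robin"

Like the satp set command, this only affects newly claimed devices; existing LUNs still need their PSP changed as described below.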

The above will change the PSP for any newly provisioned and connected LUNs. For existing LUNs you can change the PSP either manually in each LUN’s properties or by running the following command in PowerCLI:

Get-Cluster ClusterNameHere | Get-VMHost | Get-ScsiLun |
    Where-Object {$_.Vendor -eq "COMPELNT" -and $_.MultipathPolicy -eq "Fixed"} |
    Set-ScsiLun -MultipathPolicy RoundRobin
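
If you prefer to stay in esxcli, the same change can be made per device (again, the naa ID is a placeholder for your own device ID):

# esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -P VMW_PSP_RR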

This is what you should see in LUN properties as a result:

[Screenshot: LUN properties showing the Round Robin path selection policy]

Conclusion

By default, any LUN connected from a Dell Compellent storage array is claimed by NMP using the Fixed path selection policy. You can change it to Round Robin using the simple commands above to make sure you utilize all storage paths available to your ESXi hosts.


Masking a VMware LUN

February 7, 2016

A month ago I passed my VCAP-DCA exam, which I blogged about in this post. One of the DCA exam topics in the blueprint was LUN masking using PSA-related commands.

To be honest, I can hardly imagine a use case for this, as LUN masking is normally done on the storage array side. I’ve never seen LUN masking done on the hypervisor side before.

If you have a use case for host-side LUN masking, leave me a comment below; I’d be curious to know. But regardless of its usefulness, it’s in the exam, so we have to study it, right? Let’s get to it.

Overview

There are many blog posts on the Internet about VMware LUN masking, but only a few explain the exact behaviour of each command and how to fix the issues you can potentially run into.

VMware uses the Pluggable Storage Architecture (PSA) to claim devices on ESXi hosts. All hosts have one plug-in installed by default, the Native Multipathing Plug-in (NMP), which claims all devices. Masking a LUN is done by unclaiming it from NMP and claiming it with a special plug-in called MASK_PATH.

New claim rules are added with the “esxcli storage core claimrule add” command. It accepts multiple ways of addressing a device; the most widely used are:

  • By device ID:
    • -t device -d naa.600601604550250018ea2d38073cdf11
  • By location:
    • -t location -A vmhba33 -C 0 -T 0 -L 2
  • By target:
    • -t target -R iscsi -i iqn.2011-03.example.org.istgt:iscsi1 -L 0
    • -t target -R fc --wwnn 50:06:01:60:ba:60:11:53 --wwpn 50:06:01:60:3a:60:11:53

To determine device names use the following command:

# esxcli storage core device list

To determine iSCSI device targets:

# esxcli iscsi session list

To determine FC paths, WWNNs and WWPNs:

# esxcli storage core path list

Mask an iSCSI LUN

Let’s take iSCSI as an example. To mask an iSCSI LUN add a new claim rule using MASK_PATH plug-in and addressing by target (for FC use an FC target instead):

# esxcli storage core claimrule add -r 102 -t target -R iscsi -i iqn.2011-03.example.org.istgt:iscsi1 -L 0 -P MASK_PATH

Once the rule is added you MUST load it, otherwise the rule will not work:

# esxcli storage core claimrule load

Now list the rules and make sure there is both a “runtime” and a “file” instance of the new rule. Without the file rule, masking will not take effect.

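The command for this is shown below; in its output, look for the rule number used above (102) appearing once in each rule class (illustrative output, other rules trimmed):

# esxcli storage core claimrule list

Rule Class  Rule  Class    Type    Plugin     Matches
----------  ----  -------  ------  ---------  -------
MP           102  runtime  target  MASK_PATH  ...
MP           102  file     target  MASK_PATH  ...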

The last step is to unclaim the device from the NMP plug-in which currently owns it and apply the new set of rules:

# esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 0 -L 0
# esxcli storage core claiming unclaim -t location -A vmhba33 -C 1 -T 0 -L 0
# esxcli storage core claimrule run

You can list devices connected to the host to confirm that the masked device is no longer in the list:

# esxcli storage core device list

Remove masking

To remove masking, unclaim the device from MASK_PATH plug-in, delete the masking rule and reload/re-run the rule set:

# esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 0 -L 0
# esxcli storage core claiming unclaim -t location -A vmhba33 -C 1 -T 0 -L 0
# esxcli storage core claimrule remove -r 102
# esxcli storage core claimrule load
# esxcli storage core claimrule run

Sometimes you need to reboot the host for the device to reappear.
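
Before resorting to a reboot, it’s worth rescanning the storage adapters first:

# esxcli storage core adapter rescan --all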

Conclusion

Make sure to always mask all targets/paths to the LUN; this is true for iSCSI as well as FC, as both support multipathing. You have a choice of masking by location, target or path (masking by device is not supported).

For an FC LUN, for instance, you may choose to mask the LUN by location. If you have two single-port FC adapters in each host, you will typically be masking four paths per LUN. To accomplish that, specify the adapter with the -A flag and the channel, target and LUN ID with the -C, -T and -L flags, as sketched below.
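
As a sketch, masking an FC LUN by location with two single-port HBAs and two array targets (the adapter names, rule numbers and LUN ID here are all hypothetical) could look like this:

# esxcli storage core claimrule add -r 110 -t location -A vmhba1 -C 0 -T 0 -L 20 -P MASK_PATH
# esxcli storage core claimrule add -r 111 -t location -A vmhba1 -C 0 -T 1 -L 20 -P MASK_PATH
# esxcli storage core claimrule add -r 112 -t location -A vmhba2 -C 0 -T 0 -L 20 -P MASK_PATH
# esxcli storage core claimrule add -r 113 -t location -A vmhba2 -C 0 -T 1 -L 20 -P MASK_PATH

followed by the same load, unclaim and run steps shown earlier.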

Hope that helps you to tick off this exam topic from the blueprint.