Posts Tagged ‘target’

Masking a VMware LUN

February 7, 2016

A month ago I passed my VCAP-DCA exam, which I blogged about in this post. One of the DCA exam topics in the blueprint is LUN masking using PSA-related commands.

To be honest, I can hardly imagine a use case for this, as LUN masking is normally done on the storage array side. I have never seen LUN masking done on the hypervisor side before.

If you have a use case for host-side LUN masking, leave me a comment below. I'd be curious to know. But regardless of its usefulness, it's in the exam, so we have to study it, right? So let's get to it.

Overview

There are many blog posts on the Internet on how to do VMware LUN masking, but only a few explain what exactly happens after you run each of the commands and how to fix the issues you can potentially run into.

VMware uses the Pluggable Storage Architecture (PSA) to claim devices on ESXi hosts. All hosts have one plug-in installed by default, the Native Multipathing Plug-in (NMP), which claims all devices. Masking a LUN is done by unclaiming it from NMP and claiming it with a special plug-in called MASK_PATH.
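Before changing anything, it is worth listing the rules and plug-ins already registered on the host. The command below is standard esxcli; on a default install the rule set typically already includes the catch-all NMP rule and a built-in MASK_PATH rule or two:

# esxcli storage core claimrule list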

The "esxcli storage core claimrule add" namespace is used to add new claim rules. It accepts several ways of addressing a device; the most widely used are listed below (a full location-based example follows the list):

  • By device ID:
    • -t device -d naa.600601604550250018ea2d38073cdf11
  • By location:
    • -t location -A vmhba33 -C 0 -T 0 -L 2
  • By target:
    • -t target -R iscsi -i iqn.2011-03.example.org.istgt:iscsi1 -L 0
    • -t target -R fc --wwnn 50:06:01:60:ba:60:11:53 --wwpn 50:06:01:60:3a:60:11:53
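As an illustration of the location-based syntax, a masking rule for a hypothetical LUN 2 behind adapter vmhba33 would look like this (the rule number and path values are examples, adjust them to your environment):

# esxcli storage core claimrule add -r 120 -t location -A vmhba33 -C 0 -T 0 -L 2 -P MASK_PATH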

To determine device names use the following command:

# esxcli storage core device list

To determine iSCSI device targets:

# esxcli iscsi session list

To determine FC paths, WWNNs and WWPNs:

# esxcli storage core path list

Mask an iSCSI LUN

Let's take iSCSI as an example. To mask an iSCSI LUN, add a new claim rule using the MASK_PATH plug-in and addressing by target (for FC use an FC target instead):

# esxcli storage core claimrule add -r 102 -t target -R iscsi -i iqn.2011-03.example.org.istgt:iscsi1 -L 0 -P MASK_PATH

Once the rule is added you MUST load it, otherwise it will not take effect:

# esxcli storage core claimrule load

Now list the rules and make sure rule 102 shows up both as a "runtime" and as a "file" rule. Without the file rule, masking will not take effect.
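The listing is done with the same claimrule namespace (output omitted here; the screenshot below shows what to expect):

# esxcli storage core claimrule list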

[Screenshot: esxcli claim rule list showing rule 102 as both a runtime and a file rule]

The last step is to unclaim the device from the NMP plug-in which currently owns it and apply the new set of rules:

# esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 0 -L 0
# esxcli storage core claiming unclaim -t location -A vmhba33 -C 1 -T 0 -L 0
# esxcli storage core claimrule run

You can list devices connected to the host to confirm that the masked device is no longer in the list:

# esxcli storage core device list

Remove masking

To remove masking, unclaim the device from the MASK_PATH plug-in, delete the masking rule and reload/re-run the rule set:

# esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 0 -L 0
# esxcli storage core claiming unclaim -t location -A vmhba33 -C 1 -T 0 -L 0
# esxcli storage core claimrule remove -r 102
# esxcli storage core claimrule load
# esxcli storage core claimrule run

Sometimes you need to reboot the host for the device to reappear.

Conclusion

Make sure to always mask all targets/paths to the LUN; this applies to iSCSI as well as FC, since both support multipathing. You have a choice of masking by location, target and path (masking by device is not supported).

For an FC LUN, for instance, you may choose to mask the LUN by location. If you have two single-port FC adapters in each host, you will typically be masking four paths per LUN. To accomplish that, specify the adapter with the -A flag and the path's channel, target and LUN ID with the -C, -T and -L flags.
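As a sketch of what that could look like for a hypothetical LUN 10 visible through adapters vmhba1 and vmhba2 and two array targets (rule numbers, adapter names and IDs are all illustrative, adjust them to your environment):

# esxcli storage core claimrule add -r 110 -t location -A vmhba1 -C 0 -T 0 -L 10 -P MASK_PATH
# esxcli storage core claimrule add -r 111 -t location -A vmhba1 -C 0 -T 1 -L 10 -P MASK_PATH
# esxcli storage core claimrule add -r 112 -t location -A vmhba2 -C 0 -T 0 -L 10 -P MASK_PATH
# esxcli storage core claimrule add -r 113 -t location -A vmhba2 -C 0 -T 1 -L 10 -P MASK_PATH

The rules are then loaded, the paths unclaimed and the rule set re-run exactly as described in the iSCSI example above.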

Hope that helps you to tick off this exam topic from the blueprint.


Dell Compellent iSCSI Configuration

November 20, 2015

I haven't seen too many blog posts on how to configure Compellent for iSCSI, and there seems to be some confusion about what the best practices for iSCSI are. I hope I can shed some light on this by sharing my experience.

In this post I want to talk specifically about the Windows scenario, such as when you want to use it for Hyper-V. I used Windows Server 2012 R2, but the process is similar for other Windows Server versions.

Design Considerations

All iSCSI design considerations revolve around the networking configuration. The two questions you need to ask yourself are what your switch topology is going to look like and how you are going to configure your subnets. This typically boils down to two common scenarios: two stacked switches with one subnet, or two standalone switches with two subnets. I could not find a specific recommendation from Dell on whether it should be one or two subnets, so I assume that both scenarios are supported.

It is worth mentioning that Compellent uses the concept of Fault Domains to group front-end ports that are connected to the same Ethernet network. This means you will have one fault domain in the one-subnet scenario and two fault domains in the two-subnet scenario.

For iSCSI target port discovery from the hosts, you need to configure a Control Port on the Compellent. The Control Port has its own IP address, and one Control Port is configured per Fault Domain. When a server targets the Control Port IP address, it automatically discovers all ports in the fault domain. In other words, instead of using the IPs configured on the Compellent iSCSI ports, you'll need to use the Control Port IP for iSCSI target discovery.

Compellent iSCSI Configuration

In my case I had two stacked switches, so I chose to use one iSCSI subnet. This translates into one Fault Domain and one Control Port on the Compellent.

IP settings for iSCSI ports can be configured at Storage Management > System > Setup > Configure iSCSI IO Cards.

[Screenshot: iSCSI I/O card IP settings]

To create and assign Fault Domains go to Storage Management > System > Setup > Configure Local Ports > Edit Fault Domains. From there select your fault domain and click Edit Fault Domain. On the IP Settings tab you will find the iSCSI Control Port IP address settings.

[Screenshot: Configure Local Ports / Edit Fault Domains]

[Screenshot: Fault Domain Control Port IP settings]

Host MPIO Configuration

On the Windows Server, start by installing the Multipath I/O feature. Then go to the MPIO Control Panel and add support for iSCSI devices. After a reboot you will see MSFT2005iSCSIBusType_0x9 in the list of supported devices. This step is important: if you skip it, then when you map a Compellent disk to the hosts you will see multiple copies of the same disk device in Device Manager (one per path) instead of a single disk.

[Screenshot: MPIO Control Panel, adding support for iSCSI devices]

[Screenshot: MSFT2005iSCSIBusType_0x9 listed under supported devices]
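If you prefer the command line to the GUI, the same result can be achieved roughly as follows (run from an elevated PowerShell prompt; -r lets mpclaim reboot the host when needed, use -n instead to suppress the automatic reboot):

PS> Install-WindowsFeature -Name Multipath-IO
PS> mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

Alternatively, on 2012 and later the Enable-MSDSMAutomaticClaim -BusType iSCSI cmdlet from the MPIO module should achieve the same result.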

Host iSCSI Configuration

To connect the hosts to the storage array, open iSCSI Initiator Properties and add the Control Port IP to the list of discovery targets. In the list of discovered targets you should then see four Compellent iSCSI ports.

The next step is to connect the initiators to the targets. This is where it is easy to make a mistake. In my scenario I have one iSCSI subnet, which means that each of the two host NICs can talk to all four array iSCSI ports. As a result I should have 2 host ports x 4 array ports = 8 paths. To accomplish that, on the Targets tab I have to connect each initiator IP to each target port by clicking the Connect button twice for each target and selecting one initiator IP the first time and the other the second time. A scripted alternative is shown after the screenshots below.

[Screenshot: iSCSI Initiator discovery portal settings]

[Screenshot: list of discovered Compellent targets]

[Screenshot: connecting each initiator IP to each target]
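For those who prefer to script the initiator side, the same discovery and login can be done with the built-in iSCSI PowerShell cmdlets. A rough sketch, assuming 10.10.10.100 is the Control Port IP and 10.10.10.11/10.10.10.12 are the host iSCSI NIC addresses (all IPs are made up):

# Discover the Compellent ports via the Control Port
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.100

# Log in to every discovered target from both host initiator IPs (8 sessions in total)
foreach ($target in Get-IscsiTarget) {
    Connect-IscsiTarget -NodeAddress $target.NodeAddress -InitiatorPortalAddress 10.10.10.11 -IsMultipathEnabled $true -IsPersistent $true
    Connect-IscsiTarget -NodeAddress $target.NodeAddress -InitiatorPortalAddress 10.10.10.12 -IsMultipathEnabled $true -IsPersistent $true
}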

Compellent Volume Mapping

Once all hosts are logged in to the array, go back to Storage Manager and add the servers to the inventory by clicking Servers > Create Server. You should see the hosts' iSCSI adapters in the list already. Make sure to assign the correct host type; I chose Windows 2012 Hyper-V.

 

[Screenshot: Create Server wizard with host iSCSI adapters listed]

It is also a best practice to create a Server Cluster container and add all hosts to it if you are deploying a Hyper-V or a vSphere cluster. This guarantees consistent LUN IDs across all hosts when a LUN is mapped to the Server Cluster object.

From here you can create your volumes and map them to the Server Cluster.

Check iSCSI Paths

To make sure that multipathing is configured correctly, use "mpclaim" to show the I/O paths. As you can see below, even though we have 8 iSCSI sessions to the storage array, there are only 4 paths to each LUN.
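For reference, the relevant mpclaim commands look like this (the disk number is just an example; use whatever numbers the first command returns):

PS> mpclaim -s -d
PS> mpclaim -s -d 1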

[Screenshot: mpclaim output showing 4 paths per LUN]

Arrays such as EMC VNX and NetApp FAS use Asymmetric Logical Unit Access (ALUA), where a LUN is owned by only one controller but presented through both. Paths to the owning controller are marked Active/Optimized, while paths to the non-owning controller are marked Active/Non-Optimized and are used only if the owning controller fails.

Compellent is different. Instead of ALUA it uses iSCSI Redirection to move traffic to a surviving controller in a failover situation and does not need to present the LUN through both controllers. This is why you see 4 paths instead of 8, which would be the case if we used an ALUA array.


Zoning vs. LUN masking explained

September 28, 2012

Zoning and masking are terms often confused by those who have just started working with SAN. But it takes five minutes of googling to understand that the main difference is that zoning is configured on a SAN switch on a per-port (or per-WWN) basis, while masking is a storage array feature with LUN granularity. All modern hardware supports both zoning and masking. Given that, the much more interesting question is: what is the point of zoning if masking offers finer granularity?

Both security features do the same thing: they restrict access to particular storage targets. So it may seem that there is no point in configuring both of them, but that is not true. One argument, although not a very convincing one, is that if one of the features is accidentally misconfigured, you still maintain security. The much bigger issue with a no-zoning configuration is RSCNs. RSCNs are Registered State Change Notification messages issued by the SAN Name Server service when the fabric changes its configuration (a new device has been added to the fabric, a zone has changed, a switch name or IP address has changed, etc.). RSCNs can be disruptive to fabric operation, and if you don't have zones, RSCNs are flooded to everyone each time something changes in the fabric, even if it has nothing to do with the majority of devices. So zoning is a SAN best practice and its configuration is highly recommended.

In fact, Brocade recommends adopting the so-called Single Initiator Zoning (SIZ) practice, where one host pWWN (initiator) is zoned to one or more storage pWWNs. It reduces the RSCN issue to a minimum.
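To make SIZ a bit more concrete, here is a rough sketch of what a single-initiator zone could look like in Brocade FOS CLI. All aliases, WWPNs and the config name are made up, and the exact workflow (cfgcreate vs. cfgadd) depends on whether a zoning config already exists on the fabric:

> alicreate "host1_hba0", "10:00:00:00:c9:aa:bb:01"
> alicreate "array_spa_p0", "50:06:01:60:3a:60:11:53"
> zonecreate "z_host1_hba0_array", "host1_hba0; array_spa_p0"
> cfgcreate "prod_cfg", "z_host1_hba0_array"
> cfgenable "prod_cfg"

The key point is one zone per host initiator, containing that initiator plus the storage ports it needs to reach.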

As a reference, read Brocade's Secure SAN Zoning – Best Practices paper.