Dell Compellent iSCSI Configuration

I haven’t seen many blog posts on how to configure Compellent for iSCSI, and there seems to be some confusion about what the best practices for iSCSI are. I hope I can shed some light on the topic by sharing my experience.

In this post I want to talk specifically about the Windows scenario, such as when you want to use it for Hyper-V. I used Windows Server 2012 R2, but the process is similar for other Windows Server versions.

Design Considerations

All iSCSI design considerations revolve around networking configuration. The two questions you need to ask yourself are what your switch topology will look like and how you will configure your subnets. It typically boils down to the two most common scenarios: two stacked switches with one subnet, or two standalone switches with two subnets. I could not find a specific recommendation from Dell on whether it should be one or two subnets, so I assume both scenarios are supported.

It is worth mentioning that Compellent uses the concept of Fault Domains to group front-end ports connected to the same Ethernet network. This means you will have one fault domain in the one-subnet scenario and two fault domains in the two-subnet scenario.

For iSCSI target port discovery from the hosts, you need to configure a Control Port on the Compellent. The Control Port has its own IP address, and one Control Port is configured per Fault Domain. When a server targets the Control Port IP address, it automatically discovers all ports in the fault domain. In other words, instead of using the IPs configured on the Compellent iSCSI ports, you use the Control Port IP for iSCSI target discovery.

Compellent iSCSI Configuration

In my case I had two stacked switches, so I chose to use one iSCSI subnet. This translates into one Fault Domain and one Control Port on the Compellent.

IP settings for iSCSI ports can be configured at Storage Management > System > Setup > Configure iSCSI IO Cards.

[Screenshot: iSCSI port IP configuration]

To create and assign Fault Domains go to Storage Management > System > Setup > Configure Local Ports > Edit Fault Domains. From there select your fault domain and click Edit Fault Domain. On IP Settings tab you will find iSCSI Control Port IP address settings.

[Screenshot: Configure Local Ports dialog]

[Screenshot: Control Port IP settings]

Host MPIO Configuration

On the Windows Server side, start by installing the Multipath I/O feature. Then go to the MPIO control panel and add support for iSCSI devices. After a reboot you will see MSFT2005iSCSIBusType_0x9 in the list of supported devices. This step is important: if you skip it, then when you map a Compellent disk to the hosts, instead of one disk you will see multiple copies of the same disk device in Device Manager (one per path).
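The same two steps can be done from an elevated PowerShell prompt. This is a sketch using the standard ServerManager and MPIO module cmdlets shipped with Windows Server 2012 R2:

```shell
# Install the Multipath I/O feature (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI-attached devices for MPIO - equivalent to ticking
# "Add support for iSCSI devices" in the MPIO control panel
Enable-MSDSMAutomaticClaim -BusType iSCSI

# After the reboot, verify the iSCSI bus type is in the supported list
# (vendor MSFT2005, product iSCSIBusType_0x9)
Get-MSDSMSupportedHW
```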

[Screenshot: adding iSCSI device support in MPIO]

[Screenshot: MSFT2005iSCSIBusType_0x9 in the supported devices list]

Host iSCSI Configuration

To connect hosts to the storage array, open iSCSI Initiator Properties and add the Control Port IP address as a discovery portal. On the list of discovered targets you should see four Compellent iSCSI ports.

The next step is to connect the initiators to the targets, and this is where it is easy to make a mistake. In my scenario I have one iSCSI subnet, which means each of the two host NICs can talk to all four array iSCSI ports. As a result, I should have 2 host ports x 4 array ports = 8 paths. To accomplish that, on the Targets tab connect each initiator IP to each target port by clicking the Connect button twice for each target, selecting one initiator IP the first time and the other the second time.
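The discovery and the initiator-to-target fan-out can also be scripted with the built-in iSCSI PowerShell cmdlets. This is a sketch; all IP addresses below are placeholders, so substitute your own Control Port and host NIC addresses:

```shell
# Register the Compellent Control Port as a discovery portal
# (10.10.10.10 is a placeholder for your fault domain's Control Port IP)
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.10

# List the discovered Compellent targets - four in the one-subnet scenario
Get-IscsiTarget

# Connect every host NIC to every target:
# 2 initiator IPs x 4 targets = 8 iSCSI sessions
$initiators = "10.10.10.21", "10.10.10.22"   # placeholder host iSCSI NIC IPs
foreach ($target in Get-IscsiTarget) {
    foreach ($ip in $initiators) {
        Connect-IscsiTarget -NodeAddress $target.NodeAddress `
            -InitiatorPortalAddress $ip `
            -IsPersistent $true -IsMultipathEnabled $true
    }
}
```

`-IsPersistent` makes the sessions reconnect after a reboot, matching the "Add this connection to the list of Favorite Targets" checkbox in the GUI.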

[Screenshot: iSCSI target discovery]

[Screenshot: discovered Compellent targets]

[Screenshot: connecting initiator IPs to targets]

Compellent Volume Mapping

Once all hosts are logged in to the array, go back to Storage Manager and add the servers to the inventory by clicking Servers > Create Server. You should already see the hosts’ iSCSI adapters in the list. Make sure to assign the correct host type; I chose Windows 2012 Hyper-V.


[Screenshot: Create Server dialog]

It is also a best practice to create a Server Cluster container and add all hosts to it if you are deploying a Hyper-V or vSphere cluster. This guarantees consistent LUN IDs across all hosts when a LUN is mapped to the Server Cluster object.

From here you can create your volumes and map them to the Server Cluster.

Check iSCSI Paths

To make sure that multipathing is configured correctly, use “mpclaim” to show the I/O paths. As you can see, even though we have 8 iSCSI sessions to the storage array, we see only 4 paths to each LUN.
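For reference, these are the mpclaim invocations I mean (run from an elevated command prompt; the disk number below is an example and will differ in your environment):

```shell
# Summarize all MPIO-managed disks and their load-balance policies
mpclaim -s -d

# Show the individual paths for a specific MPIO disk (disk 0 here)
mpclaim -s -d 0
```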

[Screenshot: mpclaim I/O paths output]

Arrays such as EMC VNX and NetApp FAS use Asymmetric Logical Unit Access (ALUA), where a LUN is owned by only one controller but presented through both. Paths to the owning controller are marked Active/Optimized, and paths to the non-owning controller are marked Active/Non-Optimized and are used only if the owning controller fails.

Compellent is different. Instead of ALUA it uses iSCSI Redirection to move traffic to the surviving controller in a failover situation, so it does not need to present a LUN through both controllers. This is why you see 4 paths instead of the 8 you would see with an ALUA array.


25 Responses to “Dell Compellent iSCSI Configuration”

  1. Hoang Anh Says:

    Hi,

    Thanks for the guide.
    I just finished my setup (Compellent iSCSI) yesterday. With your blog, I know my setup is OK.

    Regards,

  2. Random Short Take #2 | penguinpunk.net Says:

    […] This is a great article on configuring iSCSI for the Dell Compellent; […]

  3. Xeiran Says:

    One fault domain was used here, but depending on how your switches behave in stack mode you might need to use two fault domains instead.

    For instance, Dell PowerEdge or Dell N-series switches in stack mode will all reboot at the same time when applying new firmware. The only way to prevent this (and prevent your hypervisor cluster from going down) is to use a LAG connection which allows for separate switch reboots.

    • niktips Says:

      Hi Xeiran, I assume you’re comparing stacking with MLAG? If so, I agree. MLAG has the benefit of non-disruptive upgrades. However, it adds some complexity around switch management, as MLAG switches are managed separately.

      In regards to the fault domains configuration, as per Dell best practices, you should have one fault domain per network subnet: “A separate fault domain must be created for each front-end Ethernet network”. This means that if you have only one Ethernet network configured for iSCSI, you will have one fault domain regardless of whether your switches are configured in a stack or an MLAG.

      Hope that makes sense.

  4. Yani Says:

    I would still use 2 fault domains regardless of switch topology. The fault domains provide separation and control over which CML virtual ports fail over to which physical ports. This separation ensures that failures such as port flapping, or any other Layer 1/2 issues you may have on the network, don’t inadvertently affect all ports tied to one fault domain/subnet. The rule of two of everything for redundancy applies to things such as PSUs, controllers and ports; the same principle should apply to the fault domains.

  5. Steph Says:

    Hi Niktips,

    So I feel I am going to re-configure our setup according to your recommendations, I am having so many issues, setup as per below:

    1 x FX2s with 4 x FC630’s and 2 x FN410s IOA

    Compellent SC4020 directly connected to the FN410s, according to the Dell Doc (Top-Bottom, Top-Bottom)

    I think the root cause of my issue (thanks to your blog!) is the fact that the deployment engineer configured only one fault domain, but the FN410s are acting independently (I suspect; need to confirm with ProSupport). So basically, when I attempt to perform any sort of action on the presented LUNs in Windows, it gives an error in Disk Management (I can bring the LUN online), randomly drops paths to the storage device for the selected FC (after attempting to format the disks), and after a reboot I cannot RDP to the host.

    My strengths are more on the software side of things (Hyper-V, clustering), but logic tells me the deployment was not done properly from the start.

    Any ideas? This issue is really getting to me, as it is the first time in 4 years at my current job where I convinced the boss to buy anything other than R-series cluster hosts.

    Thanks in advance for the help!

    • niktips Says:

      Hi Steph, this is unlikely to be a hardware issue. I’ve deployed FX2s before. They’re a good piece of gear.

      Network topology/configuration may be a problem here. If your FN410s are not stacked or in a VLT, then start by reconfiguring iSCSI to two subnets / fault domains. Make sure the first subnet is connected to IOA and the second to IOB.

  6. iSCSI Setup on Dell Compellent Array Says:

    […] using the Dell Compellent for iSCSI connections became a necessity. There’s a good blog post here that runs through most of this. I’ve added some things from my own […]

  7. SamB Says:

    How do I configure Control Ports on a Compellent EN-SC4020?
    The front end is currently configured to use 2 Fault Domains / 2 subnets.
    I mean, which ports on which controller do I physically wire to which switch?
    Thanks

  8. vishwanath Says:

    Hi,

    Yep, there are very few articles on Dell Compellent iSCSI configuration.

    I am very new to storage.

    I’ve joined a new office; here there are two Dell Compellent 20 series controllers, which were down for more than 6 months after the company got split up.
    When I power on the controllers, they do not boot properly, and I’m not sure what exactly is happening. I connected a monitor and it shows a black screen with dots. I checked the server inventory sheet and found the IP address of the controller, and when I browse to that IP address it shows “System is Starting Up” and does not change.

    I have rebooted the server and saw the iSCSI and Fibre Channel ROMs. The Fibre Channel utility shows the storage array disks.

    How do I configure the iSCSI (QLogic) target and initiator in it? When I press Ctrl+Q I can access the iSCSI configuration with two adapters, but I’m not sure how to configure it. Can you please guide me? Is there something wrong with the server, since it is not booting properly? If I could access the storage controller login through a web browser it would be very easy to configure.

    Thanks
    Vishwanath

    • niktips Says:

      Hi Vishwanath, if the array is not booting you will need to fix that first. It’ll be hard without involving Dell Compellent technical support.

      • vishwanath Says:

        Hi Nik, the array is good, I can see all the hard drives in it.
        The BIOS shows a 256GB flash drive in each controller. I believe the Storage OS is on the flash drive. Correct me if I am wrong.

      • niktips Says:

        Hi vishwanath, if your array is not booting, I would highly encourage you to contact Dell EMC technical support. They would be the best resource to assist you.

  9. Gogu81 Says:

    Hello. When I click “Edit fault domain” there is no “IP Settings” button. How can one set the IP address for the iSCSI control port? I’m stuck please help me. Thank you.

    • niktips Says:

      When you click the Edit Fault Domains button, do you see any fault domains in the list? If not, you will need to create one or two, depending on whether you want to implement a one- or two-subnet topology.

  10. rihatums Says:

    Hi Niktips, thank you for your blog. can we assign multiple vlans to a fault domain? i have two fault domains with 2 x 10gbe each. i want to assign multiple vlans to each fault domain – is that possible?

    • niktips Says:

      Hi, rihatums. Compellent supports VLAN tagging, but I don’t believe you can have more than one VLAN per port. However, depending on your Compellent model, you can install more than one iSCSI card per controller and achieve iSCSI traffic separation on a physical port basis.

  11. Jitendra Says:

    Hi…

    We have a Compellent with hosts running CentOS 6.4. Recently we migrated to CentOS 7.3, but after the migration we see only 4 paths to each LUN even though we have 8 ports on the storage (4 on each controller). Before the migration each LUN was visible via 16 paths.

    Thanks,
    Jitendra

    • niktips Says:

      Hi, Jitendra. Compellent is not an ALUA storage array (read my other blog post), which means each LUN is presented through one controller only. Depending on whether you use one or two subnets in your iSCSI network setup, you should see either eight (one subnet) or four paths (two subnets) per LUN.

  12. Aslam Says:

    Hi Nik, I have a question about the physical connections from each hypervisor to the SAN switches and to the SC40 (iSCSI storage). Our storage is configured in 2 fault domain / 2 subnet mode, and the SAN switches are stacked, acting as one switch (8 switches stacked). From the hypervisors we have 4 physical links connected to the SAN switches, and we are puzzled by this config. I have been trying to get the design from Dell, however they are not able to come up with a concrete answer. I hope you can help me figure this out here.

    We are using SC40 compellent dual controller – 2 fault domain, and around 60 hypervisors connected to it.
    So each hypervisor has 2 dual-port HBAs connected to the SAN switches, and bonding is done for each port from the HBA for HA. However, I am not sure why we need 4 physical ports; can we not achieve HA with 2 single-port HBAs?

    Hope you can help me with these questions.

    • niktips Says:

      Hi Aslam, that’s correct, you can achieve redundancy with as little as two single-port HBAs per server. The only other consideration is the path storage traffic will take, which depends on which switches the storage and server ports are connected to. If the initiator and target are not on the same switch, traffic will traverse inter-switch links, which may not be desirable if the ISLs are oversubscribed.
