Posts Tagged ‘dell’

Dell Compellent iSCSI Configuration

November 20, 2015

I haven’t seen too many blog posts on how to configure Compellent for iSCSI, and there seems to be some confusion about what the best practices for iSCSI are. I hope I can shed some light on it by sharing my experience.

In this post I want to talk specifically about the Windows scenario, such as when you want to use it for Hyper-V. I used Windows Server 2012 R2, but the process is similar for other Windows Server versions.

Design Considerations

All iSCSI design considerations revolve around networking configuration. The two questions you need to ask yourself are what your switch topology is going to look like and how you are going to configure your subnets. It typically boils down to the two most common scenarios: two stacked switches with one subnet, or two standalone switches with two subnets. I could not find a specific recommendation from Dell on whether it should be one or two subnets, so I assume that both scenarios are supported.

It’s worth mentioning that Compellent uses the concept of Fault Domains to group front-end ports that are connected to the same Ethernet network. This means you will have one fault domain in the one-subnet scenario and two fault domains in the two-subnet scenario.

For iSCSI target port discovery from the hosts, you need to configure a Control Port on the Compellent. The Control Port has its own IP address, and one Control Port is configured per Fault Domain. When a server targets the Control Port IP address, it automatically discovers all ports in the fault domain. In other words, instead of using the IPs configured on the Compellent iSCSI ports, you’ll need to use the Control Port IP for iSCSI target discovery.

Compellent iSCSI Configuration

In my case I had two stacked switches, so I chose to use one iSCSI subnet. This translates into one Fault Domain and one Control Port on the Compellent.

IP settings for iSCSI ports can be configured at Storage Management > System > Setup > Configure iSCSI IO Cards.

iscsi_ports

To create and assign Fault Domains go to Storage Management > System > Setup > Configure Local Ports > Edit Fault Domains. From there select your fault domain and click Edit Fault Domain. On the IP Settings tab you will find the iSCSI Control Port IP address settings.

local_ports

control_port

Host MPIO Configuration

On the Windows Server side, start by installing the Multipath I/O feature. Then go to the MPIO Control Panel and add support for iSCSI devices. After a reboot you will see MSFT2005iSCSIBusType_0x9 in the list of supported devices. This step is important: if you skip it, then when you map a Compellent disk to the hosts you will see multiple copies of the same disk device in Device Manager (one per path) instead of a single disk.

add_iscsi

iscsi_added
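If you prefer to script this step rather than click through the GUI, the same thing can be done from an elevated PowerShell prompt. This is just a minimal sketch, assuming Windows Server 2012 R2 with the Server Manager and MPIO cmdlets available:

# Install the Multipath I/O feature
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI-attached devices for MPIO (this is what adds MSFT2005iSCSIBusType_0x9)
Enable-MSDSMAutomaticClaim -BusType iSCSI

# A reboot is still required for the change to take effect
Restart-Computer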

Host iSCSI Configuration

To connect the hosts to the storage array, open iSCSI Initiator Properties and add the Control Port IP as a discovery portal. In the list of discovered targets you should then see four Compellent iSCSI ports.

The next step is to connect the initiators to the targets. This is where it is easy to make a mistake. In my scenario I have one iSCSI subnet, which means that each of the two host NICs can talk to all four array iSCSI ports. As a result I should have 2 host ports x 4 array ports = 8 paths. To accomplish that, on the Targets tab I have to connect each initiator IP to each target port by clicking the Connect button twice for each target, selecting one initiator IP the first time and the other the second time.

iscsi_targets

discovered_targets

connect_targets
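For reference, the discovery and the eight connections can also be scripted with the built-in iSCSI PowerShell cmdlets. A hedged sketch follows; the Control Port and initiator IP addresses are made-up placeholders, so substitute your own:

# Use the Control Port IP as the discovery portal
New-IscsiTargetPortal -TargetPortalAddress '10.10.10.10'

# Connect every discovered target from both initiator IPs (2 x 4 = 8 sessions)
$initiatorIPs = '10.10.10.11', '10.10.10.12'
foreach ($target in Get-IscsiTarget) {
    foreach ($ip in $initiatorIPs) {
        Connect-IscsiTarget -NodeAddress $target.NodeAddress -InitiatorPortalAddress $ip -IsMultipathEnabled $true -IsPersistent $true
    }
}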

Compellent Volume Mapping

Once all hosts are logged in to the array, go back to Storage Manager and add the servers to the inventory by clicking Servers > Create Server. You should see the hosts’ iSCSI adapters in the list already. Make sure to assign the correct host type. I chose Windows 2012 Hyper-V.

 

add_servers

It is also a best practice to create a Server Cluster container and add all hosts to it if you are deploying a Hyper-V or vSphere cluster. This guarantees consistent LUN IDs across all hosts when a LUN is mapped to the Server Cluster object.

From here you can create your volumes and map them to the Server Cluster.
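Back on the Windows side, a quick way to confirm that a newly mapped volume has arrived is to rescan the storage bus and look for the new raw disk. A small sketch, again assuming the standard Storage cmdlets on Windows Server 2012 R2:

# Rescan the bus and list disks that have not been initialized yet
Update-HostStorageCache
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }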

Check iSCSI Paths

To make sure that multipathing is configured correctly, use “mpclaim” to show I/O paths. As you can see, even though we have 8 paths to the storage array, we can see only 4 paths to each LUN.

io_paths
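For reference, these are the mpclaim commands in question: the first lists all MPIO-claimed disks, and the second shows the individual paths and their states for a given disk (disk number 0 here is just an example):

mpclaim -s -d
mpclaim -s -d 0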

Arrays such as EMC VNX and NetApp FAS use Asymmetric Logical Unit Access (ALUA), where a LUN is owned by only one controller but presented through both. Paths to the owning controller are marked as Active/Optimized, and paths to the non-owning controller are marked as Active/Non-Optimized and are used only if the owning controller fails.

Compellent is different. Instead of ALUA it uses iSCSI Redirection to move traffic to a surviving controller in a failover situation and does not need to present the LUN through both controllers. This is why you see 4 paths instead of 8, which would be the case if we used an ALUA array.



VNX/VNXe array negotiates FC port as L-Port

March 23, 2015

Hit an issue today where VNXe array FC ports negotiate as L-port instead of F-port when the Fill Word is set to Mode 3 (ARB/ARB then IDLE/ARB). The result is a loss of connectivity on the affected link.

vnx_lport

The recommended FC Fill Word for VNX/VNXe arrays is Mode 3, and it’s generally a good idea to set it according to best practice as part of each installation. Apparently, when changing the Fill Word from the legacy Mode 0 (IDLE/IDLE) to Mode 3 (ARB/ARB then IDLE/ARB), the array might negotiate as an L-port and the FC path goes down.

The solution is to statically configure the port as an F-port in the port settings.

vnx_lport_sol
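For reference, the related checks and changes can also be made from the Brocade CLI. Treat this as a hedged sketch (port 0 is just an example) and verify the exact syntax against your FOS release: switchshow displays the negotiated port type, portcfgshow displays the current Fill Word per port, portcfgfillword sets Mode 3, and portcfggport locks the port as a point-to-point G_Port so it cannot come up as an L-port, which should achieve the same effect as the static F-port setting in the GUI.

switchshow
portcfgshow
portcfgfillword 0, 3
portcfggport 0, 1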

Environment:

  • Dell M5424 8Gb Fibre Channel Switch: Brocade FOS v7.2.1b
  • EMC VNXe 3200: Block OE v3.1.1.4993502

Force10 MXL Switch: Stacking

March 3, 2015

Overview

There are two typical scenarios for stacking MXLs – within the chassis and across chassis. In both cases it’s recommended to use a ring topology. Daisy chaining is also supported, but not desirable because of the lack of redundancy.

In this post I will be describing the more common case, which is intra-chassis stacking. For inter-chassis stacking configuration you can refer to Dell or Force10 documentation.

Cabling

dell_chassis

In my case I have four MXL switches in bays A1, B1, B2, A2. Cabling is simple: you basically daisy-chain all the switches and then connect the last switch back to the first one to close the ring.

Stack roles and unit numbers

When the stack is built, each switch is assigned an ID (starting from 0) and a role in the stack. There are three roles: Master, Standby and Member:

  • Master – the switch you’ll use for all configuration. If you currently have IPs assigned to all your MXL switches, all of them except one will be reset and only the Master will be accessible via SSH.
  • Standby – the switch that takes over if the Master fails. The Master’s IP address is transferred to the Standby in a failover scenario, and the stack continues to be managed via the same IP.
  • Member – provides port capacity and doesn’t play any additional role in the stack.

When you plug the cables in, assign stack ports and restart the switches, they go through an election process and automatically pick up roles as well as IDs. The switches follow an algorithm that assigns stack IDs and roles, but this algorithm has nothing to do with the interconnect bay IDs in the chassis or the order in which you cable the switches. You end up with pretty much random numbering.

If order matters, then you’ll have to reboot switches one by one in a particular order to have the desired IDs assigned. In that case IDs are assigned sequentially in a controlled fashion.

Stack configuration

If you don’t have any additional 40GbE modules in slots 0 and 1, you’ll end up with two QSFP+ ports on the built-in module – ports 33 and 37 (refer to my Force10 MXL Switch: Port Numbering post for port numbering details). All you need to do is designate them as stack ports on all switches, save the config and reboot.

# stack-unit 0 stack-group 0
# stack-unit 0 stack-group 1
# copy run start
# reload

By default each switch is unit 0 in its own stack, and a stack-group is basically just a 40GbE stack port. You can have a maximum of six such ports, numbered from 0 to 5. To check that the stack ports have been enabled, run:

# do show system stack-unit 0 stack-group configured

enabled_ports

It could be that your 40GbE ports are in quad 10GbE mode and are not shown. You’ll need to convert them back to 40GbE mode to proceed (a conversion sketch follows the screenshot below). To show the list of available ports, type in the command below. The switch shows empty expansion slots as stack ports as well (ports 0/41 and 0/45), which is a bit confusing.

# show system stack-unit 0 stack-group

port_list
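If the built-in ports do turn out to be in quad 10GbE mode, something along these lines should switch them back to native 40GbE operation. This is a hedged sketch – double-check the syntax against your FTOS release, and note that the change requires a reload:

# no stack-unit 0 port 33 portmode quad
# no stack-unit 0 port 37 portmode quad
# copy run start
# reload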

After a reboot, the switches will join the stack and get a role and an ID. This process is automatic by default. To see if the stack ports have come up after a reboot, type:

# show system stack-port status

stack_up

Conclusion

In my example I let the switches go through the election process and select roles and IDs on their own. If you want to control the assignment process, refer to Dell and Force10 documentation for instructions.

Now you may wonder: if unit IDs are assigned automatically, how do you know which stack unit corresponds to which chassis bay? The trick is to show the system inventory and map the units by Service Tag, which is also shown in the Chassis Management Controller:

# show system brief
# show inventory

Random DC pictures

January 19, 2012

Several pictures of server room hardware with no particular topic.

Click pictures to enlarge.

10kVA APC UPS.

UPS’s Network Management Card (NMC) (with temperature sensor) connected to LAN.

Here you can see the battery extenders (white plugs). They allow the UPS to support a 5kVA load for 30 minutes.

Two Dell PowerEdge 1950 servers with 8 cores and 16GB RAM each, configured as a VMware High Availability (HA) cluster.

Each server has 3 virtual LANs. Each virtual LAN has its own NIC, which in turn has a multi-path connection to the Cisco switch via two cables – 6 cables in total.

Two Cisco switches maintain LAN connections for the NetApp filers, Dell servers, Sun tape library and APC NMC card. The two switches are tied together by an optical cable; the uplink is a 2Gb/s trunk.

HP rack with 9 HP ProLiant servers, HP autoloader and MSA 1500 storage.

HP autoloader with 8 cartridges.

HP MSA 1500 storage which is completely FC.

Helluva cables.