Posts Tagged ‘module’

RecoverPoint VE: Common Deployment Issues

April 19, 2016

In one of my previous posts I discussed iSCSI connectivity considerations when deploying RecoverPoint VE. In this post I want to describe common issues you may encounter when deploying RecoverPoint clusters, most of which apply to both the physical appliance and virtual editions.

VNX MirrorView ports

I already touched on that briefly in my previous post. But it’s worth mentioning again that you can NOT use MirrorView ports for iSCSI connectivity between RPAs and VNX arrays. When you try to use a MirrorView iSCSI port for RecoverPoint, it gets upset and doesn’t communicate with the array.

If you make the mistake of connecting only one port per SP and that port is a MirrorView port, you will have no communication with the array at all and will get the following error in Unisphere for RecoverPoint:

Error Splitter ARRAYNAME-A is down
Error Splitter ARRAYNAME-B is down

[Screenshot: splitter down errors in Unisphere for RecoverPoint]

If you connect two ports per SP, one of which is a MirrorView port, and use two iSCSI network subnets, you may get the following error when running a SAN connectivity test from the RPA boxmgmt interface. In this case the RPA can communicate with the array over only one subnet:

On array ABCD1234567890, all paths for device with UID=0x1234567890abcdef go through RPA Ethernet port eth2 …

[Screenshot: SAN connectivity test showing all paths through one RPA Ethernet port]

The solution is as simple as moving the link from port 0 to port 1 on a 10Gb I/O module, or from port 0 to port 1, 2 or 3 on a 1Gb I/O module.

If you don’t want to lose two iSCSI ports (one per SP), especially if they’re 10Gb, and you’re not using MirrorView, you can uninstall the MirrorView enabler from the array. Just keep in mind that this requires an array reboot. The service processors are rebooted one at a time, so there is no downtime, but if it’s a heavily used storage array it’s recommended to schedule the uninstallation out of hours to minimize the impact.
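
If you’re not sure whether the MirrorView enabler is even installed, you can check the array’s installed software packages with naviseccli before planning the reboot. Below is a minimal Python sketch that simply wraps the “ndu -list” command; the SP address is a placeholder, and it assumes naviseccli is in your PATH with CLI security already configured (otherwise add the -user/-password/-scope options):

# Sketch: check whether a MirrorView enabler shows up in the installed
# software packages on a VNX SP. "naviseccli ... ndu -list" prints the
# installed packages; the exact MirrorView package name (MirrorView/S or
# MirrorView/A) varies, so we just look for the substring.
import subprocess

SP_IP = "10.10.10.10"  # placeholder: SP management IP of your array

def mirrorview_enabler_installed(sp_ip: str) -> bool:
    result = subprocess.run(
        ["naviseccli", "-h", sp_ip, "ndu", "-list"],
        capture_output=True, text=True, check=True,
    )
    return "mirrorview" in result.stdout.lower()

if __name__ == "__main__":
    print("MirrorView enabler installed:", mirrorview_enabler_installed(SP_IP))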

Error when redeploying a cluster

If you’ve made configuration mistakes while deploying a RecoverPoint cluster and want to blow the whole thing away and redeploy it from scratch, you may encounter the following error when deploying for the second time:

VNX path set with IP 10.10.10.1 already exists in a different path set (RP_0x123abc456def789g_0_iSCSI1)

[Screenshot: path set error in RecoverPoint Deployment Manager]

The cause of the issue is stale iSCSI sessions that stayed on the VNX after you deleted the RPA VMs. You need to connect to the VNX and delete them manually in Unisphere by right-clicking the storage array name on the dashboard and selecting iSCSI > Connections Between Storage Systems. This is what duplicate sessions look like:

[Screenshot: duplicate RecoverPoint iSCSI connections in Unisphere]

As you can see, there are three sets of RecoverPoint cluster iSCSI connections left behind after three unsuccessful attempts.

You will need to delete the old sessions before you can proceed with the deployment in RecoverPoint Deployment Manager.

Wrong initiator names

I’ve seen this on multiple occasions: RecoverPoint registers initiators on the VNX with inconsistent hostnames.

As you’ve seen in the screenshots above, the hostname field of every initiator consists of the cluster ID and the RPA ID (I’m not sure what the third field means), such as this one:

RP_0x123abc456def789g_1_0

In the screenshot below you can see that RPA1 has two hostnames, one ending in _0_0 and the other in _1_0.

[Screenshot: inconsistent initiator hostnames registered on the VNX]
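
For what it’s worth, the naming pattern itself is easy to pull apart. The short sketch below just splits a registered hostname into its fields; the labels beyond the cluster ID and RPA number are my own guesses, not anything documented:

# Illustrative only: split a RecoverPoint-registered initiator hostname of
# the form RP_<clusterID>_<rpaNumber>_<suffix>. What the last field means
# is unclear (as noted above), so it is kept as an opaque string.
def parse_rp_hostname(hostname: str) -> dict:
    prefix, cluster_id, rpa_number, suffix = hostname.split("_")
    if prefix != "RP":
        raise ValueError("unexpected hostname format: " + hostname)
    return {"cluster_id": cluster_id, "rpa_number": int(rpa_number), "suffix": suffix}

print(parse_rp_hostname("RP_0x123abc456def789g_1_0"))
# {'cluster_id': '0x123abc456def789g', 'rpa_number': 1, 'suffix': '0'}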

This issue is purely cosmetic and doesn’t affect RecoverPoint operation, but if you want to fix it you will need to restart the Management Server on each VNX service processor. It’s a non-disruptive procedure and can be performed by opening http://SP_IP/setup and clicking the “Restart Management Server” button.

After the restart, the array will update the hostnames to reflect the actual configuration.

Joining two clusters with the licences already applied

This is just not going to work. Make sure to join the production and DR clusters before applying the RecoverPoint licences, or the Deployment Manager “Connect Cluster” wizard will fail.

It’s one of the prerequisites specified in the RecoverPoint “Installation and Deployment Guide”:

If you plan to connect the new cluster immediately after preparing it for connection,
ensure:

  • You do not install a license in, or modify the settings of, the new cluster before
    connecting it to the existing system.

Conclusion

There are always many more things that can potentially go wrong. But if any of the above helped you solve your RecoverPoint deployment issues, let me know in the comments below!


Force10 MXL Switch: Port Numbering

February 26, 2015

This is a quick cheat sheet for the MXL port numbering scheme, which might seem a bit confusing if you’re seeing an MXL switch for the first time.

[Photo: Force10 MXL 10/40GbE blade switches]

Above is a picture of the switches I’ve worked with. On the right is a 2-port 40GbE built-in module, and then there are two expansion slots: slot 0 in the middle and slot 1 on the left. Each module has 8 port numbers allocated to it, because you can have a 2-port 40GbE QSFP+ module in each of the slots operating in 8x10GbE mode. That mode requires QSFP+ to 4xSFP+ breakout cables, but it’s not the most common scenario anyway.

As we have 8 ports per slot, it would look something like this:

[Diagram: MXL external port mappings]

This picture is really about switch stacking, but the rightmost section should give you the basic idea. One of the typical MXL configurations is a built-in 40GbE module used for stacking and one or two 4-port SFP+ expansion modules in slots 0 and 1. In that case your port numbers will be 33 and 37 for the 40GbE ports, 41 to 44 in expansion slot 0, and 49 to 52 in expansion slot 1.

[Photo: 2-port QSFP+ and 4-port SFP+ expansion modules]

As you can see, for a QSFP+ module the switch breaks the 8 port numbers into two sets of 4 and picks the first number in each set for the 40GbE ports. For SFP+ modules it uses consecutive numbers within each slot and then leaves a 4-port gap before the next slot.
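
If it helps, here is a small Python sketch that reproduces the numbering described above for this variant of the switch. The starting offsets (33, 41, 49) and module types are taken straight from the examples in this post; verify them against your own chassis, especially given the mirrored variant mentioned further down:

# Rough sketch of MXL external port numbering as described above: each slot
# gets a block of 8 numbers (built-in module at 33, slot 0 at 41, slot 1 at 49).
SLOT_BASE = {"builtin": 33, "slot0": 41, "slot1": 49}

def external_ports(slot: str, module: str) -> list:
    base = SLOT_BASE[slot]
    if module == "2x40GbE":            # QSFP+ module in native 40GbE mode
        return [base, base + 4]        # first number of each group of four
    if module == "2x40GbE-breakout":   # QSFP+ module in 8x10GbE mode
        return list(range(base, base + 8))
    if module == "4x10GbE":            # 4-port SFP+ module
        return list(range(base, base + 4))
    raise ValueError("unknown module type: " + module)

print(external_ports("builtin", "2x40GbE"))  # [33, 37]
print(external_ports("slot0", "4x10GbE"))    # [41, 42, 43, 44]
print(external_ports("slot1", "4x10GbE"))    # [49, 50, 51, 52]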

Port numbering is described in more detail in the MXL switch configuration guide, which you can use as a reference. But this short note might help someone quickly knock that off instead of browsing through a 1000-page document.

Also, I’ve seen pictures of MXL switches with slightly different port numbering: 41 to 48 in slot 0 and 33 to 40 in slot 1, which looks like a mirrored version of the switch with the built-in module on the opposite side. I’m not sure if it’s just an older version of the same switch, but keep in mind that you might actually have the other variation of the MXL in your blade chassis.