RecoverPoint VE: iSCSI Network Design

March 29, 2016

RecoverPoint is a great storage replication product. It supports Continuous Data Protection (CDP) and gives you RPO figures measured in seconds, compared to standard asynchronous storage-based replication solutions, where RPO is measured in minutes or even hours.

RecoverPoint comes in three flavours:

  • RecoverPoint SE/EX/CL – a physical appliance for replication between VNX arrays (RecoverPoint/SE), VNX/VMAX/VPLEX arrays (RecoverPoint/EX), or EMC and non-EMC storage arrays (RecoverPoint/CL).
  • RecoverPoint VE – a virtual edition of RecoverPoint, installed as a VM, which supports the same SE/EX/CL editions.
  • RecoverPoint for Virtual Machines – also a virtual appliance, but array-agnostic: it works at the hypervisor level by replicating VMs instead of LUNs.

In this blog post we will be discussing connectivity options for RecoverPoint VE (SE edition). Make sure not to confuse RecoverPoint VE with RecoverPoint for Virtual Machines, as they are two completely different products.

VNX MirrorView ports

MirrorView is another EMC replication solution, integrated into VNX arrays. If the MirrorView enabler is installed, it claims the first FC port and the first iSCSI port for itself. When patching VNX iSCSI ports, make sure NOT to use the ports claimed by MirrorView.

[Image: mirrorview_ports – VNX ports claimed by MirrorView]

If you use 1GbE (4-port) I/O modules, you can use three ports per SP (all except port 0); if you have 10GbE (2-port) I/O modules, you can use only one port per SP. I will talk about workarounds for this in the next blog post.

RPA appliance iSCSI vNICs

Each RecoverPoint appliance has two iSCSI vNICs, which can be configured on either one or two subnets. If you use one 10Gb port on each SP, as in the example above, then you're forced to use one subnet, because you need at least two ports on each SP to have two subnets.

If you have 1Gb I/O modules in your VNX array, then you will most likely have two 1Gb iSCSI ports connected on each SP. In that case you can use two iSCSI subnets to reduce the number of iSCSI sessions between the RPAs and the VNX.

On the vSphere side you will need to create one or two iSCSI port groups, depending on how many subnets you've decided to allocate, and connect the RPA vNICs accordingly.

[Image: rpa_iscsi – RPA iSCSI vNIC connectivity]
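If you are scripting the host-side configuration, the sketch below shows one way to create those port groups with pyVmomi on a standard vSwitch. This is a minimal illustration, not part of the RecoverPoint procedure: the vCenter address, credentials, ESXi host name, vSwitch name and VLAN IDs are all placeholder values you would substitute with your own.

```python
# Minimal pyVmomi sketch: create the VM port groups the RPA iSCSI vNICs
# will attach to. All names, credentials and VLAN IDs below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)

def add_iscsi_portgroup(host, name, vlan_id, vswitch="vSwitch1"):
    """Add a standard-switch port group for one RPA iSCSI subnet."""
    spec = vim.host.PortGroup.Specification(
        name=name,
        vlanId=vlan_id,
        vswitchName=vswitch,
        policy=vim.host.NetworkPolicy(),
    )
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)

# One subnet -> one port group; two subnets -> one port group per subnet.
add_iscsi_portgroup(host, "RPA-iSCSI-A", 46)
add_iscsi_portgroup(host, "RPA-iSCSI-B", 47)

Disconnect(si)
```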

VNX iSCSI Connections

RecoverPoint clusters are deployed and connected using a special tool called Deployment Manager. It assigns all IP addresses, connects RecoverPoint clusters to VNX arrays and joins sites together.

Once deployment is finished, you will have iSCSI connections created on the VNX array, configured according to how many iSCSI subnets you're using.

1. One Subnet Example

Let's look at the one-subnet topology first. In this example you have one 10Gb port per VNX SP and two ports on each of the two RPAs, all on one subnet. When you right-click the storage array in Unisphere and select iSCSI > Connections Between Storage Systems, you should see something similar to this.

[Image: iscsi_connections – Connections Between Storage Systems (one subnet)]

As you can see, ports iSCSI1 and iSCSI2 on RPA0 and RPA1 are mapped to two ports on the storage array, A-5 and B-5. Four RPA ports are connected to two VNX ports, which gives you eight iSCSI initiator records on the VNX.

[Image: iscsi_initiators – iSCSI initiator records (one subnet)]

2. Two Subnets Example

If you connect two 1Gb ports per VNX SP and decide to use two subnets, then each SP will have one port on each of the two subnets. The same goes for the RPAs: each RPA will have one vNIC connected to each subnet.

iSCSI connections are set up a little differently now, because only the VNX and RPA ports that are on the same subnet should be able to talk to each other.

[Image: iscsi_connections2 – Connections Between Storage Systems (two subnets)]

Every RPA in this example has one IP on the xxx.xxx.46.0/255.255.255.192 subnet (iSCSI A) and one IP on the xxx.xxx.46.64/255.255.255.192 subnet (iSCSI B). Similarly, ports A-10 and B-10 on the VNX are configured on the iSCSI A subnet, and ports A-11 and B-11 are configured on the iSCSI B subnet. Because of that, the iSCSI1 ports are mapped to ports A-10/B-10 and the iSCSI2 ports are mapped to ports A-11/B-11.

As we are using two subnets in this example, instead of 4 RPA ports × 4 VNX ports = 16 iSCSI connections, we end up with 2 RPA ports × 2 VNX ports (subnet iSCSI A) + 2 RPA ports × 2 VNX ports (subnet iSCSI B) = 8 iSCSI connections.

[Image: iscsi_initiators2 – iSCSI initiator records (two subnets)]
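To make the arithmetic explicit, here is a small, purely illustrative Python sketch that reproduces the session counts for both topologies described in this post. The port counts are taken from the examples above; plug in your own layout to estimate yours.

```python
# Back-of-the-envelope check of the iSCSI session counts discussed above.

def iscsi_sessions(rpa_ports_per_subnet, vnx_ports_per_subnet, subnets):
    """Each RPA port logs in to every VNX port on the same subnet."""
    return rpa_ports_per_subnet * vnx_ports_per_subnet * subnets

# One-subnet example: 4 RPA ports x 2 VNX ports (A-5, B-5)
print(iscsi_sessions(4, 2, 1))   # 8

# Two-subnet example: 2 RPA ports x 2 VNX ports per subnet (A-10/B-10, A-11/B-11)
print(iscsi_sessions(2, 2, 2))   # 8

# The same four RPA and four VNX ports on a single flat subnet would give:
print(iscsi_sessions(4, 4, 1))   # 16
```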

Conclusion

The goal of this post was to discuss the points which are not very well explained in the RecoverPoint documentation. It's not a comprehensive guide by any means. You can find the full deployment procedure, with prerequisites, installation and configuration steps, in the EMC RecoverPoint Installation and Deployment Guide.

ESXi Host Maintenance with Zerto

February 1, 2016

Zerto replication is quite easy to configure. Once you have a Zerto Virtual Manager (ZVM) and Virtual Replication Appliances (VRAs) up and running at both sites, you can start adding your virtual machines to replication. There is, however, one question which comes up a lot from the operations point of view: if you have replication going between the sites and you need to put one of your ESXi hosts into maintenance mode, will that break the replication? The answer, as always, is: it depends.

Source Site

In Zerto you typically have VRAs installed on each of the hosts at both sites and traffic going one way – from the production data centre to DR. Now, if you want to do maintenance on one of the hosts that VMs are being replicated FROM (the production site), all you need to do is vMotion the VMs to the remaining hosts. Zerto fully supports vMotion and the process is seamless: when VMs are moved to other hosts, the VRAs on those hosts automatically pick them up and replication continues without user intervention.
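If you would rather script the evacuation than vMotion each VM by hand, a rough pyVmomi sketch is shown below. The vCenter, host and VM names are hypothetical, and it assumes the VMs' guest DNS names are resolvable by the search index (swap in your own lookup if not). In a fully automated DRS cluster, simply entering maintenance mode will trigger the same migrations for you.

```python
# Hedged pyVmomi sketch: vMotion protected VMs off the production host that
# is going into maintenance. All names below are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.prod.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
idx = si.RetrieveContent().searchIndex

target_host = idx.FindByDnsName(None, "esxi02.prod.local", False)

for vm_name in ["app01.prod.local", "app02.prod.local"]:
    vm = idx.FindByDnsName(None, vm_name, True)   # vmSearch=True, relies on VMware Tools
    vm.MigrateVM_Task(pool=None, host=target_host,
                      priority=vim.VirtualMachine.MovePriority.defaultPriority)

Disconnect(si)
```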

Destination Site

If you want to do maintenance on one of the hosts that VMs are being replicated TO (the DR site), then you need to be more careful. VMs replicated by Zerto are not shown in the vCenter inventory and obviously can't be moved using the conventional vMotion method. Instead, this is done from the ZVM GUI.

[Image: zerto_vra – Change VM Recovery VRA in ZVM]

In ZVM, find the host you want to put into maintenance mode on the Setup tab and, in the More drop-down menu, select Change VM Recovery VRA. Select the replacement host you want to redirect VM replication to and click Save. What this option does in Zerto is somewhat similar to what vMotion does in vSphere – it migrates VMs between VRAs.

Once you hit the button, the VMs' RPO will start to grow until the migration is finished. In my case, for 12 VMs the process took about 5 minutes to complete. If you have dozens of protected VMs on each of the VRAs, it may take significantly longer. If that's a concern, you may want to allocate a maintenance window for this activity.

[Image: zerto_rpo – RPO during VRA migration]

You will also get a warning that the migration will result in a bitmap sync. A bitmap sync tracks the changed blocks on a VM while replication to the destination VRA is interrupted. The amount of data changed over a 5-minute period should be reasonably small, and in my experience VMs get back in sync very quickly after a migration.

When all replicated VMs have been moved to another recovery host, you can vMotion off any VMs you have running on the host, shut down the VRA and put the host into maintenance mode to carry out the maintenance activities.
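That order of operations is easy to wrap in a few pyVmomi calls. The sketch below is only an outline of the sequence described above, with a hypothetical VRA inventory path and host name.

```python
# Rough pyVmomi outline of the steps described above: power off the VRA,
# then put the DR host into maintenance mode. Names and paths are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.dr.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
idx = si.RetrieveContent().searchIndex

host = idx.FindByDnsName(None, "esxi-dr01.dr.local", False)
vra = idx.FindByInventoryPath("DR-DC/vm/Z-VRA-esxi-dr01")   # hypothetical inventory path

vra.PowerOffVM_Task()                       # or a graceful guest shutdown if Tools is running
host.EnterMaintenanceMode_Task(timeout=0)   # timeout=0 means no timeout

Disconnect(si)
```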

Once that's finished, just do the reverse: take the host out of maintenance mode, boot up the VRA and move the migrated VMs back. In the Change VM Recovery VRA dialogue you can select a completely different set of VMs to move back; as long as you keep them balanced across all the VRAs in your cluster, you should be good.
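And the reverse once maintenance is complete, reusing the connection and lookup boilerplate from the previous sketch: take the host out of maintenance mode and power the VRA back on before moving the VMs back in ZVM.

```python
# Continuation of the previous sketch: bring the host and VRA back online.
host.ExitMaintenanceMode_Task(timeout=0)
vra.PowerOnVM_Task()
```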