
Zerto Overview

March 6, 2014

Zerto is a VM replication product which works at the hypervisor level. In contrast to array-level replication, which SRM has been using for a long time, it eliminates the storage array from the equation, along with all the complexities that used to come with it (SRAs, splitting LUNs into replicated and non-replicated VMs, potential incompatibilities between the orchestrated components, etc.).

Basic Operation

Zerto consists of two components: ZVM (Zerto Virtual Manager) and VRA (Virtual Replication Appliance). VRAs are VMs that need to be installed on each ESXi host within the vCenter environment (this is performed in an automated fashion from the ZVM console). ZVM manages the VRAs and all replication settings and is installed once per vCenter. The VRA mirrors the protected VMs' I/O operations to the recovery site. VMs are grouped into VPGs (Virtual Protection Groups), which can be used as consistency groups or simply as containers.
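To make the relationships concrete, here is a minimal conceptual model of the ZVM / VRA / VPG hierarchy in Python. It is purely illustrative: the class and field names are my own shorthand, not Zerto's actual API or object model.

from dataclasses import dataclass, field
from typing import List

# Illustrative model only: names and fields are shorthand, not Zerto's API.

@dataclass
class VRA:
    """Virtual Replication Appliance: one per ESXi host, mirrors VM I/O to the recovery site."""
    host: str

@dataclass
class VPG:
    """Virtual Protection Group: a consistency group / container of protected VMs."""
    name: str
    vms: List[str] = field(default_factory=list)
    journal_history_hours: int = 4      # configurable from 1 to 120 hours

@dataclass
class ZVM:
    """Zerto Virtual Manager: one per vCenter, manages VRAs and all replication settings."""
    vcenter: str
    vras: List[VRA] = field(default_factory=list)
    vpgs: List[VPG] = field(default_factory=list)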

Protected VMs can be preseeded to the DR site. But what Zerto essentially does is replicate VM disks to any datastore on the recovery site that you point it to, and then track changes in what is called a journal volume. A journal is created for each VM and is kept as a VMDK within the “ZeRTO volumes” folder on a target datastore. Every few seconds Zerto creates checkpoints in the journal, which serve as crash-consistent recovery points. So you can recover to any point in time with a few seconds of granularity. You can set the journal length in hours, depending on how far back you might want to go. It can be anywhere between 1 and 120 hours.
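As an illustration of how point-in-time recovery from such a journal works conceptually (my own sketch, not Zerto internals), recovery simply picks the latest crash-consistent checkpoint at or before the requested time:

from bisect import bisect_right

def pick_checkpoint(checkpoints, requested_time):
    """checkpoints: sorted checkpoint timestamps (seconds); returns the latest one
    at or before requested_time, i.e. the crash-consistent point to recover to."""
    i = bisect_right(checkpoints, requested_time)
    if i == 0:
        raise ValueError("requested time is older than the journal history")
    return checkpoints[i - 1]

# Checkpoints taken every 5 seconds; recovering to t=1012 lands on the t=1010 checkpoint.
checkpoints = list(range(1000, 1060, 5))
print(pick_checkpoint(checkpoints, 1012))   # 1010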

VMs are kept unregistered from vCenter on the DR site and VM configuration data is kept in the Zerto repository. This essentially means that if an outage happens, something goes really wrong, and Zerto fails to bring up VMs on the DR site, you will need to recreate the VMs manually. But since the VMDKs themselves are kept in their original format, you will still be able to attach them to new VMs and power them on.

Failover Scenarios

There are four failover scenarios within Zerto:

  • Move Operation – VMs are shut down on the production site, unregistered from inventory, and powered on at the DR site; protection is reversed if you decide to do so. If you choose not to reverse protection, VMs are completely removed from the production site and the VPG is marked as “Needs Configuration”. This scenario can be seen as a planned migration of VMs between sites and requires both sites to be healthy and operational.
  • Failover Operation – used in a disaster scenario when the production site might be unavailable. In this case Zerto brings up the protected VMs on the DR site, but it does not try to remove the VMs from the production site inventory and leaves them as is. If the production site is still accessible you can optionally select to shut the VMs down. You cannot automatically reverse protection in this scenario; the VPG is marked as “Needs Configuration” and can be activated later. When it is activated, Zerto does all the clean-up operations on the former production site: shuts down the VMs (if they haven’t been already), unregisters them from inventory and moves them to the VRA folder on the datastore.
  • Failover Test Operation – this is for failover testing and brings up VMs on the DR site in a configured bubble network which is normally not uplinked to any physical network. VMs continue to run on both sites. Note that the VMs’ disk files in this scenario are not moved to the VM folders (as in the two previous scenarios) but are just connected from the VRA VM folder. You will also notice that Zerto creates a second journal volume, called the “scratch” journal. Changes to the VM running on the DR site are saved to this journal while it is being tested.
  • Clone Operation – VMs are cloned on the DR site and connected to the network. The VMs are not automatically powered on, to prevent potential network conflicts. This can be used, for instance, in DR site testing when you want to check actual network connectivity instead of connecting VMs to an isolated network, or for implementing backups, a cloned environment for application testing, etc.

Zerto Journal Sizing

By default, journal history is configured as 4 hours and the journal size is unlimited. Depending on the data change rate within the VM, the journal can be smaller or larger. 15GB is approximately enough storage to support a virtual machine with 1TB of storage, assuming a 10% change rate per day with four hours of journal history saved. Zerto has a Journal Sizing Tool which helps to size journals. You can also create a separate journal datastore.
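As a sanity check on those numbers, a back-of-the-envelope estimate (my own approximation, not Zerto's sizing formula; use the Journal Sizing Tool for real designs) gives roughly the same figure:

def journal_size_estimate_gb(vm_size_gb, daily_change_rate, journal_hours):
    """Rough journal size: the data changed during the journal window, assuming
    changes are spread evenly over the day and ignoring journal metadata overhead."""
    changed_per_day_gb = vm_size_gb * daily_change_rate
    return changed_per_day_gb * journal_hours / 24.0

# 1TB VM, 10% daily change rate, 4 hours of history -> ~16.7GB,
# in the same ballpark as the ~15GB figure quoted above.
print(round(journal_size_estimate_gb(1000, 0.10, 4), 1))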

Zerto compared to VMware Replication and SRM

There are several replication products on the market from VMware: standalone VMware Replication, VMware Replication + SRM orchestration, and SRM array-based replication. If you want to know more about how they compare to Zerto, you can read the articles mentioned in the references below. One apparent Zerto advantage, which I want to mention here, is integration with vCloud Director, which is essential for cloud providers who offer DRaaS solutions. SRM has no vCloud Director support.

References


NetApp NVRAM and Write Caching

July 19, 2013

Overview

NetApp storage systems use several types of memory for data caching. Non-volatile battery-backed memory (NVRAM) is used for write caching (whereas main memory, and flash memory in the form of either a PCIe extension card or SSD drives, is used for read caching). Before going to the hard drives, all writes are cached in NVRAM. NVRAM is split in half: each time 50% of NVRAM gets full, incoming writes are cached to the second half while the first half is written to disks. If NVRAM doesn't fill up within a 10-second interval, a system timer forces a flush.
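A minimal sketch of that double-buffered flush logic (a conceptual illustration only, not ONTAP code; for simplicity the 10-second timer is checked on each write rather than in the background):

import time

class WriteLog:
    """One half of the log absorbs new writes while the other half is flushed to disk;
    a flush is triggered when the active half fills up or 10 seconds have passed."""

    def __init__(self, half_capacity_bytes, flush_interval_s=10):
        self.halves = [[], []]            # the two halves of NVRAM
        self.active = 0                   # index of the half taking new writes
        self.used = 0                     # bytes consumed in the active half
        self.capacity = half_capacity_bytes
        self.interval = flush_interval_s
        self.last_flush = time.monotonic()

    def write(self, entry):
        self.halves[self.active].append(entry)
        self.used += len(entry)
        if self.used >= self.capacity or time.monotonic() - self.last_flush >= self.interval:
            self._switch_and_flush()

    def _switch_and_flush(self):
        flushing, self.active = self.active, 1 - self.active
        self.used = 0
        self.last_flush = time.monotonic()
        # In the real system the flush to disk (the Consistency Point) runs in the
        # background while the other half keeps absorbing writes; here we just drop it.
        self.halves[flushing].clear()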

To be more precise, when a data block comes into a NetApp filer, it's actually written to main memory and then journaled in NVRAM. NVRAM here serves as a backup in case the filer fails. Once the data has been written to disks as part of a so-called Consistency Point (CP), the write blocks that were cached in main memory become the first candidates to be evicted and replaced by other data.

Caching Approach

NetApp is frequently criticized for the small amount of write cache. For example, the FAS3140 has only 512MB of NVRAM, and the FAS3220 has a bit more at 1.6GB. In mirrored HA or MetroCluster configurations, NVRAM is mirrored via the NVRAM interconnect adapter. Half of the NVRAM is used for local operations and the other half for the partner's. In this case the amount of write cache becomes even smaller. In the FAS32xx series, NVRAM has been integrated into main memory and is now called NVMEM. You can check the amount of NVRAM/NVMEM in your filer by running:

> sysconfig -a

There are two answers to the question of why NetApp includes less cache in their controllers. The first one is given in a white paper called “Optimizing Storage Performance and Cost with Intelligent Caching”. It states that NetApp uses a different approach to write caching compared to other vendors. Most often, when a data block comes in, cache is used to keep the 8KB data block, as well as an 8KB inode and an 8KB indirect block for large files. This way, write cache can be thought of as part of the physical file system, because it mimics its structure. NetApp, on the other hand, uses a journaling approach. When a data block is received by the filer, the 8KB data block is cached along with a 120B header. The header contains all the information needed to replay the operation. After each cache flush a Consistency Point (CP) is created, which is a special type of consistent file system snapshot. If the controller fails, the only thing which needs to be done is reverting the file system to the latest consistency point and replaying the log.
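The difference in per-write cache consumption is easy to quantify from the figures above (a rough comparison; real-world overheads vary):

KB = 1024
traditional = 8 * KB + 8 * KB + 8 * KB   # data block + inode + indirect block = 24KB
journaled   = 8 * KB + 120               # data block + 120-byte replay header ~= 8.1KB
print(traditional / journaled)           # ~2.96, i.e. roughly 3x less cache per write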

But this white paper was written in 2010, and cache journaling is not a feature unique to NetApp; many vendors are now using it. The other answer, which makes more sense, was found in one of the toasters mailing list archives here: NVRAM weirdness (UNCLASSIFIED). I'll just quote the answer:

The reason it’s so small compared to most arrays is because of WAFL. We don’t need that much NVRAM because when writes happen, ONTAP writes out single complete RAID stripes and calculates parity in memory. If there was a need to do lots of reads to regenerate parity, then we’d have to increase the NVRAM more to smooth out performance.

NVLOG Shipping

A feature called NVLOG shipping is an integral part of sync and semi-sync SnapMirror. NVLOG shipping is simply a transfer of NVRAM writes from the primary to the secondary storage system. Writes on the primary cannot be transferred directly to the NVRAM of the secondary system because, in contrast to mirrored HA and MetroCluster, SnapMirror doesn't have any hardware implementation of NVRAM mirroring. That's why the stream of data is first written to special files on the volume's parent aggregate on the secondary system and then read into NVRAM.


Documents I found useful:

  • WP-7107: Optimizing Storage Performance and Cost with Intelligent Caching
  • TR-3326: 7-Mode SnapMirror Sync and SnapMirror Semi-Sync Overview and Design Considerations
  • TR-3548: Best Practices for MetroCluster Design and Implementation
  • United States Patent 7730153: Efficient use of NVRAM during takeover in a node cluster