Posts Tagged ‘snapmirror’
August 9, 2013
NetApp has quite a few features related to replication and clustering:
- HA pairs (including mirrored HA pairs)
- Aggregate mirroring with SyncMirror
- MetroCluster (Fabric and Stretched)
- SnapMirror (Sync, Semi-Sync, Async)
It’s easy to get lost here, so let’s try to understand what goes where.

SnapMirror
SnapMirror is volume-level replication, which normally works over an IP network (SnapMirror can also work over FC, but only with FC-VI cards, and that is not widely used).
The asynchronous version of SnapMirror replicates data according to a schedule. SnapMirror Sync uses NVLOG shipping (described briefly in my previous post) to synchronously replicate data between two storage systems. SnapMirror Semi-Sync is in between and synchronizes writes at the Consistency Point (CP) level.
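For reference, all three modes are selected via the schedule field of a snapmirror.conf entry on the destination system. A hypothetical sketch (system and volume names are made up; a real file would contain only one entry per destination volume):
# asynchronous, daily at 01:00
fas1:vol_src fas2:vol_dst - 0 1 * *
# fully synchronous
fas1:vol_src fas2:vol_dst - sync
# semi-synchronous
fas1:vol_src fas2:vol_dst - semi-sync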
SnapMirror provides protection from data corruption inside a volume. But with SnapMirror you don’t have automatic failover of any sort: you need to break the SnapMirror relationship and present the data to clients manually, then resynchronize the volumes when the problem is fixed.
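A minimal sketch of that manual failover, run on the destination system (system and volume names are hypothetical):
> snapmirror quiesce vol_dst
> snapmirror break vol_dst
Clients are then pointed at vol_dst; once the source is healthy again, the relationship can be re-established:
> snapmirror resync -S fas1:vol_src vol_dst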
SyncMirror
SyncMirror mirrors aggregates and works at the RAID level. You can configure mirroring between two shelves of the same system and prevent an outage in case of a shelf failure.
SyncMirror uses a concept of plexes to describe mirrored copies of data. You have two plexes: plex0 and plex1. Each plex consists of disks from a separate pool: pool0 or pool1. Disks are assigned to pools depending on cabling. Disks in each of the pools must be in separate shelves to ensure high availability. Once shelves are cabled, you enable SyncMirror and create a mirrored aggregate using the following syntax:
> aggr create aggr_name -m -d disk-list -d disk-list
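For example, a hypothetical mirrored aggregate built from three disks in pool0 and three in pool1 (the disk names are made up):
> aggr create aggr_mir -m -d 0a.16 0a.17 0a.18 -d 0b.32 0b.33 0b.34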
HA Pair
An HA Pair is basically two controllers which both have connections to their own and their partner’s shelves. When one of the controllers fails, the other one takes over. This is called Cluster Failover (CFO). Controller NVRAM is mirrored over the NVRAM interconnect link, so even data which hasn’t been committed to disks isn’t lost.
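Takeover and giveback are driven by the cf command set; a quick sketch (cf status shows whether takeover is currently possible):
> cf status
> cf takeover
> cf giveback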
MetroCluster
MetroCluster provides failover on a storage system level. It uses the same SyncMirror feature beneath it to mirror data between two storage systems (instead of two shelves of the same system as in pure SyncMirror implementation). Now even if a storage controller fails together with all of its storage, you are safe. The other system takes over and continues to service requests.
An HA pair can’t fail over when a disk shelf fails, because the partner doesn’t have a copy of the data to service requests from.
Mirrored HA Pair
You can think of a Mirrored HA Pair as an HA pair with SyncMirror between the systems. You can implement almost the same configuration on an HA pair with SyncMirror inside (not between) the systems, because the odds of a whole storage system (controller + shelves) going down are low. But mirroring between two systems can give you more peace of mind.
It cannot fail over like MetroCluster when one of the storage systems goes down; the whole process is manual. The reasonable question here is: why can’t it fail over if it has a copy of all the data? Because MetroCluster is a separate piece of functionality, which performs all the checks and carries out the cutover to the mirror. This is called Cluster Failover on Disaster (CFOD). SyncMirror is only a mirroring facility and doesn’t even know that the cluster exists.
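In a MetroCluster, the disaster cutover is triggered manually from the surviving site with a forced takeover (the post-disaster giveback procedure is more involved and not shown here):
> cf forcetakeover -d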
Tags:aggregate, Async, asynchronous, cabling, CFO, CFOD, cluster, Consistency Point, controller, CP, cutover, fabric, failover, failure, FC, FC-VI, fiber channel, give back, HA, high availability, interconnect, MetroCluster, mirror, NetApp, NVLOG, NVMEM, NVRAM, pair, plex, pool, RAID, replication, semi-sync, shelf, snapmirror, storage, stretched, sync, synchronous, SyncMirror, take over
Posted in NetApp | 3 Comments »
July 19, 2013
Overview
NetApp storage systems use several types of memory for data caching. Non-volatile battery-backed memory (NVRAM) is used for write caching, whereas main memory and flash memory (in the form of either a PCIe extension card or SSD drives) are used for read caching. Before going to the hard drives, all writes are cached in NVRAM. NVRAM is split in half: each time one half gets full, writes start being cached to the second half while the first half is written to disk. If NVRAM doesn’t fill up within a 10-second interval, a system timer forces the flush.
To be more precise, when a data block comes into a NetApp filer, it’s actually written to main memory and then journaled in NVRAM. NVRAM here serves as a backup in case the filer fails. Once data has been written to disk as part of a so-called Consistency Point (CP), the write blocks cached in main memory become the first candidates to be evicted and replaced by other data.
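You can watch Consistency Points happening in real time with sysstat; the ‘CP ty’ column shows what triggered each CP, for example a timer or a full NVLOG half:
> sysstat -x 1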
Caching Approach
NetApp is frequently criticized for the small amount of write cache. For example, a FAS3140 has only 512MB of NVRAM, and a FAS3220 has a bit more, 1.6GB. In mirrored HA or MetroCluster configurations NVRAM is mirrored via the NVRAM interconnect adapter: half of the NVRAM is used for local operations and the other half for the partner’s. In this case the amount of write cache becomes even smaller. In the FAS32xx series NVRAM has been integrated into main memory and is now called NVMEM. You can check the amount of NVRAM/NVMEM in your filer by running:
> sysconfig -a
There are two answers to the question of why NetApp includes less cache in their controllers. The first one is given in a white paper called “Optimizing Storage Performance and Cost with Intelligent Caching”. It states that NetApp uses a different approach to write caching compared to other vendors. Most often when a data block comes in, the cache is used to keep the 8KB data block, as well as the 8KB inode and an 8KB indirect block for large files. This way the write cache can be thought of as part of the physical file system, because it mimics its structure. NetApp, on the other hand, uses a journaling approach. When a data block is received by the filer, the 8KB data block is cached along with a 120-byte header, which contains all the information needed to replay the operation. After each cache flush a Consistency Point (CP) is created, which is a special type of consistent file system snapshot. If the controller fails, the only thing that needs to be done is reverting the file system to the latest Consistency Point and replaying the log.
But this white paper was written in 2010, and cache journaling is not a feature unique to NetApp; many vendors now use it. The other answer, which makes more sense, was found in the toaster mailing list archives: NVRAM weirdness (UNCLASSIFIED). I’ll just quote the answer:
The reason it’s so small compared to most arrays is because of WAFL. We don’t need that much NVRAM because when writes happen, ONTAP writes out single complete RAID stripes and calculates parity in memory. If there was a need to do lots of reads to regenerate parity, then we’d have to increase the NVRAM more to smooth out performance.
NVLOG Shipping
A feature called NVLOG shipping is an integral part of sync and semi-sync SnapMirror. NVLOG shipping is simply a transfer of NVRAM writes from the primary to the secondary storage system. Writes on the primary cannot be transferred directly to NVRAM of the secondary system, because, in contrast to mirrored HA and MetroCluster, SnapMirror doesn’t have any hardware implementation of NVRAM mirroring. That’s why the stream of data is first written to special files on the destination volume’s parent aggregate on the secondary system and only then read into NVRAM.

Documents I found useful:
WP-7107: Optimizing Storage Performance and Cost with Intelligent Caching
TR-3326: 7-Mode SnapMirror Sync and SnapMirror Semi-Sync Overview and Design Considerations
TR-3548: Best Practices for MetroCluster Design and Implementation
United States Patent 7730153: Efficient use of NVRAM during takeover in a node cluster
Tags:battery-backed, block, cache, caching, cluster, Consistency Point, CP, data, file system, flash, flush, HA, high availability, journal, log, memory, MetroCluster, mirroring, NetApp, non-volatile, NVLOG, NVMEM, NVRAM, partner, PCIe, primary, secondary, semi-sync, shipping, snapmirror, snapshot, SSD, storage, sync
Posted in NetApp | 2 Comments »
July 1, 2013
If you run the lun resize command on a NetApp filer, you might run into the following error:
lun resize: New size exceeds this LUN’s initial geometry
The reason behind it is that each SAN LUN has head/cylinder/sector geometry. It’s not an actual physical mapping to the underlying disks and has no meaning these days; it’s simply a SCSI protocol artifact. But it does impose a limitation on the maximum LUN resize. Geometry is chosen at initial LUN creation and cannot be changed. Roughly, you can resize a LUN to about 10 times its size at the time of creation. For example, a 50GB LUN can be extended to a maximum of 502GB. See the table below for the maximum sizes:
Initial Size Maximum Size
< 50g 502g
51-100g 1004g
101-150g 1506g
151-200g 2008g
201-251g 2510g
252-301g 3012g
302-351g 3514g
352-401g 4016g
To check the maximum size for particular LUN use the following commands:
> priv set diag
> lun geometry lun_path
> priv set
If you run into this issue, unfortunately you will need to create a new LUN, copy all the data (using robocopy, for example) and make a cutover, because features such as volume-level SnapMirror or ndmpcopy will recreate the LUN’s geometry together with the data.
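A rough sketch of that workaround, with hypothetical names: create a new LUN already sized for future growth (so its geometry allows later resizes), map it, copy the data across, and retire the old LUN:
> lun create -s 500g -t windows /vol/vol1/newlun
> lun map /vol/vol1/newlun win_igroup
Copy the data with robocopy, cut clients over, and then take the old LUN offline:
> lun offline /vol/vol1/oldlun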
Tags:aggregate, cutover, Filer, geometry, limit, LUN, ndmpcopy, NetApp, resize, SAN, SCSI, snapmirror, volume
Posted in NetApp | Leave a Comment »
May 31, 2013
SnapMirroring to a disaster recovery site requires a huge amount of data to be transferred over the WAN link. In some cases replication can lag significantly behind the defined schedule. There are two ways to reduce the amount of traffic and speed up replication: deduplication and compression.
If you apply deduplication to the replicated volumes, you simply reduce the amount of data that needs to be transferred. You can read how to enable deduplication in my previous post.
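In short, deduplication is enabled per volume with the sis commands; a quick sketch, assuming the source volume is /vol/vol_src:
> sis on /vol/vol_src
> sis start -s /vol/vol_src
> sis status /vol/vol_src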
Compression is a lesser-known feature of SnapMirror. It compresses the data being transferred on the source and decompresses it on the destination; the data inside the volume itself is left intact.
To enable SnapMirror compression, you first need to make sure that all your connections in the snapmirror.conf file have names, like:
connection_name=multi(src_system,dst_system)
Then use the ‘compression=enable’ configuration option to enable it for a particular SnapMirror relationship:
connection_name:src_vol dst_system:dst_vol compression=enable 0 2 * *
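Put together, a hypothetical snapmirror.conf on the destination might look like this (system and volume names are made up):
dr_conn=multi(fas1,fas2)
dr_conn:vol_src fas2:vol_dst compression=enable 0 2 * *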
To check the compression ratio after the transfer has finished, run:
> snapmirror status -l
And look at ‘Compression Ratio’ line:
Source: fas1:src
Destination: fas2:dest
Status: Transferring
Progress: 24 KB
Compression Ratio: 3.5 : 1
…
The one drawback of compression is an increased CPU load. Monitor your CPU load and if it’s too high, use compression selectively.
Tags:compression, CPU, deduplication, disaster recovery, DR, lag, NetApp, optimization, ratio, replication, schedule, snapmirror, transfer, WAN
Posted in NetApp | Leave a Comment »
October 9, 2011
Storage systems usually store data critical for the organization: databases, mailboxes, employee files, etc. Typically you don’t provide access to NAS from the Internet. If the filer has a real (routable) IP address to provide CIFS or NFS access inside the organization, you can just block all incoming connections from the outside world on the perimeter firewall. But what if a network engineer messes up the firewall configuration? If you don’t take even simple security measures, then all your organization’s data is at risk.
Here I’d like to describe basic measures to secure a NetApp filer:
- Disable insecure rsh and telnet access in favor of SSH:
options rsh.enable off
options telnet.enable off
- Restrict SSH access to particular IP addresses. Take into consideration that if you have enabled AD authentication, the domain Administrator user and Administrators group will implicitly have SSH access.
options ssh.access host=ip_address_1,ip_address_2
- You can configure the filer to allow file access via the HTTP protocol. If you don’t have an HTTP license or you don’t use HTTP, then disable it:
options http.enable off
- Even if you don’t have an HTTP license, you can access the NetApp FilerView web interface to manage the filer. You can access it via SSL or a plain connection; naturally, SSL is more secure:
options http.admin.enable off
options http.admin.ssl.enable on
- Restrict access to FilerView:
options httpd.admin.access host=ip_address_1,ip_address_2
- If you don’t use SNMP then disable it:
options snmp.enable off
- I’m using NDMP to back up the filer’s data. It’s done through a separate virtual network. I restrict NDMP to work only between the filers (we have two of them) and the backup server, and only through a particular virtual interface:
On Filer1:
options ndmpd.access "host=backup_server_ip,filer2_ip_address AND if=interface_name"
options ndmpd.preferred_interface interface_name
On Filer2:
options ndmpd.access "host=backup_server_ip,filer1_ip_address AND if=interface_name"
options ndmpd.preferred_interface interface_name
- Disable other services you don’t use:
options snapmirror.enable off
options snapvault.enable off
- The module responsible for SSH and FilerView SSL connections is called SecureAdmin. You probably won’t need to configure it, since it’s enabled by default. You can verify that ssh2 and ssl connections are enabled with:
secureadmin status
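If SSH turns out not to be set up, SecureAdmin can generate host keys and enable it (the exact prompts vary between Data ONTAP versions):
secureadmin setup ssh
secureadmin enable ssh2
secureadmin disable ssh1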
- Make sure all built-in users have strong passwords. You can list built-in users by:
useradmin user list
- By default the filer has home directory CIFS shares for all users. If you don’t use them, disable them by deleting:
/etc/cifs_homedir.cfg
- The filer also has ETC$ and C$ default shares. I’d highly recommend restricting access to these shares to the local filer Administrator user only. In fact, if you have enabled AD authentication, then the domain Administrator user and Administrators group will also implicitly have access to these shares, even if you don’t specify them in the ACL. Delete all existing permissions and add:
cifs access etc$ filer_system_name\Administrator "Full Control"
cifs access c$ filer_system_name\Administrator "Full Control"
Basically, this is it. Now you can say that you know how to configure basic NetApp security.
Tags:active directory, AD, CIFS, Filer, filerview, firewall, http, httpd, NAS, ndmp, NetApp, NFS, rsh, secureadmin, security, snap-vault, snapmirror, snmp, ssh, ssl, storage, telnet
Posted in Hardware, NetApp | 11 Comments »