NetApp SnapMirror Optimization

May 31, 2013

SnapMirroring to a disaster recovery site requires a huge amount of data to be transferred over the WAN link. In some cases replication can significantly lag behind the defined schedule. There are two ways to reduce the amount of traffic and speed up replication: deduplication and compression.

If you apply deduplication to the replicated volumes, you simply reduce the amount of data that needs to be transferred. You can read how to enable deduplication in my previous post.
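For reference, on a 7-Mode system enabling deduplication and processing the data that already resides on a volume looks roughly like this (vol_name is a placeholder):

> sis on /vol/vol_name
> sis start -s /vol/vol_name

The first command turns deduplication on for the volume; the second starts a scan of the existing data, so blocks that were written before deduplication was enabled get deduplicated ahead of the next SnapMirror transfer.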

Compression is a lesser-known feature of SnapMirror. It compresses the data on the source before sending it over the wire and decompresses it on the destination. The data inside the volume itself is left intact.

To enable SnapMirror compression, you first need to make sure that all connections in the snapmirror.conf file have names, like:

connection_name=multi(src_system,dst_system)

Then use the ‘compression=enable’ configuration option to enable it for a particular SnapMirror relationship:

connection_name:src_vol dst_system:dst_vol compression=enable 0 2 * *
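Putting the two pieces together, a minimal snapmirror.conf on the destination system might look something like this (fas1, fas2, src_vol and dst_vol are placeholder names):

fas1_fas2=multi(fas1,fas2)
fas1_fas2:src_vol fas2:dst_vol compression=enable 0 2 * *

The first line defines the named connection; the second replicates src_vol to dst_vol daily at 02:00 with compression enabled. The named connection is required here: compression cannot be enabled on an entry that references the source system directly.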

To check the compression ratio after the transfer has finished, run:

> snapmirror status -l

And look at the ‘Compression Ratio’ line:

Source: fas1:src
Destination: fas2:dest
Status: Transferring
Progress: 24 KB
Compression Ratio: 3.5 : 1

The one drawback of compression is an increased CPU load. Monitor your CPU load and, if it’s too high, use compression selectively.
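One way to keep an eye on the CPU while a transfer is running is the standard sysstat command, for example:

> sysstat -x 1

This prints extended statistics, including overall CPU utilization, every second. If the CPU column sits near 100% during replication windows, that is a sign to enable compression only on selected relationships.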

NetApp Reallocate

May 24, 2013

The smallest addressable block of data in Data ONTAP is 4k. However, all data is written to volumes in 256k chunks (64 contiguous 4k blocks). When a data block bigger than 256k comes in, the filer searches for a contiguous 256k of free space in the file system. If it’s found, the data block is written into it; if not, the filer splits the data block and puts it in several places. This is called fragmentation and is familiar to everyone from the times when FAT file systems were in use. It’s not a big issue in modern file systems like NTFS or WAFL, but defragmentation can help to solve performance problems in some situations.

In mostly random read/write environments (which are quite common these days) fragmentation has no impact on performance. If you read or write data at random places on the hard drive, it doesn’t matter whether that data is laid out randomly or sequentially on the physical media. NetApp recommends considering defragmentation for applications with a sequential read type of workload:

  • Online transaction processing databases that perform large table scans
  • E-mail systems that use database storage with verification processes
  • Host-side backup of LUNs

The reallocation process uses threshold values to represent the file system layout optimization level, where 4 is normal and everything above 10 is considered not optimal.

To check the level of optimization for a particular volume, use:

> reallocate measure -o /vol/vol_name

If you decide to run reallocate on the volume, run:

> reallocate start -f /vol/vol_name
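Without ‘-f’ the command starts a periodic reallocation job rather than a one-time run; such a job uses the threshold described above and only reallocates when the measured level exceeds it. The threshold can be adjusted with the ‘-t’ option, for example:

> reallocate start -t 8 /vol/vol_name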

There are certain considerations if you’re using snapshots or deduplication on your volumes. There is a “-p” option to prevent inflating snapshots during reallocation, and starting from version 8.1 Data ONTAP also supports reallocation of deduplicated volumes. Consult the official documentation for additional information.
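For instance, a one-time full reallocation that avoids inflating snapshot space would look roughly like:

> reallocate start -f -p /vol/vol_name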

Further reading:

TR-3929: Reallocate Best Practices