
NetApp NVRAM and Write Caching

July 19, 2013

Overview

NetApp storage systems use several types of memory for data caching. Non-volatile battery-backed memory (NVRAM) is used for write caching, whereas main memory and flash memory (in the form of either a PCIe expansion card or SSD drives) are used for read caching. All writes are cached in NVRAM before going to hard drives. NVRAM is split in half: each time one half fills up, incoming writes are cached in the other half while the full half is written to disk. If NVRAM doesn’t fill up within a 10-second interval, a system timer forces the flush.

To be more precise, when a data block comes into a NetApp filer, it is actually written to main memory and then journaled in NVRAM. NVRAM here serves as a backup in case the filer fails. Once data has been written to disk as part of a so-called Consistency Point (CP), the write blocks cached in main memory become the first candidates to be evicted and replaced by other data.
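
If you want to watch consistency points in action, sysstat shows them in real time. A minimal example, assuming a 7-Mode filer (the exact column layout varies between ONTAP versions):

> sysstat -x 1

The “CP ty” column shows what triggered each Consistency Point, for example ‘T’ for the 10-second timer or ‘F’ for a full NVRAM half.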

Caching Approach

NetApp is frequently criticized for the small amount of write cache in its controllers. For example, the FAS3140 has only 512MB of NVRAM, and the FAS3220 has a bit more at 1.6GB. In mirrored HA or MetroCluster configurations NVRAM is mirrored via the NVRAM interconnect adapter: half of the NVRAM is used for local operations and the other half for the partner’s, so the amount of write cache becomes even smaller. In the FAS32xx series NVRAM has been integrated into main memory and is now called NVMEM. You can check the amount of NVRAM/NVMEM in your filer by running:

> sysconfig -a

There are two answers to the question of why NetApp includes less write cache in its controllers. The first one is given in the white paper “Optimizing Storage Performance and Cost with Intelligent Caching”. It states that NetApp uses a different approach to write caching than other vendors. Most often, when a data block comes in, the cache has to keep the 8KB data block itself, as well as an 8KB inode and an 8KB indirect block for large files. The write cache can then be thought of as part of the physical file system, because it mimics its structure. NetApp, on the other hand, uses a journaling approach: when a data block is received by the filer, the 8KB data block is cached along with a 120-byte header containing all the information needed to replay the operation. After each cache flush a Consistency Point (CP) is created, which is a special type of consistent file system snapshot. If the controller fails, the only thing that needs to be done is reverting the file system to the latest Consistency Point and replaying the log.
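
A back-of-the-envelope comparison (my own arithmetic, ignoring any bookkeeping overhead) shows why this matters: with 512MB of NVRAM, journaling 8KB plus a 120-byte header per operation fits roughly 536,870,912 / 8,312 ≈ 64,000 logged writes, while caching 24KB (data, inode and indirect block) per operation fits only about 536,870,912 / 24,576 ≈ 21,800, roughly a threefold difference for the same amount of memory.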

But this white paper was written in 2010, and cache journaling is no longer a feature unique to NetApp; many vendors now use it. The other answer, which makes more sense, was found in the toasters mailing list archives here: NVRAM weirdness (UNCLASSIFIED). I’ll just quote the answer:

The reason it’s so small compared to most arrays is because of WAFL. We don’t need that much NVRAM because when writes happen, ONTAP writes out single complete RAID stripes and calculates parity in memory. If there was a need to do lots of reads to regenerate parity, then we’d have to increase the NVRAM more to smooth out performance.
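
To unpack that quote a bit (my own illustration, not from the thread): with a partial-stripe update, RAID has to read the old data block and the old parity before it can compute the new parity, a read-modify-write cycle. Because WAFL can write anywhere, it accumulates enough dirty blocks in memory to fill complete RAID stripes, so parity is computed purely from data already in memory and no disk reads are needed. NVRAM then only has to hold enough journal to cover the time between Consistency Points, not to smooth out parity-related read latency.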

NVLOG Shipping

A feature called NVLOG shipping is an integral part of sync and semi-sync SnapMirror. NVLOG shipping is simply the transfer of NVRAM writes from the primary to a secondary storage system. Writes on the primary cannot be transferred directly into the NVRAM of the secondary system because, in contrast to mirrored HA and MetroCluster, SnapMirror has no hardware implementation of NVRAM mirroring. That’s why the stream of data is first written to special files on the volume’s parent aggregate on the secondary system and then read into its NVRAM.
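
You can check whether a synchronous relationship has caught up from the ordinary status command (assuming 7-Mode; I’d expect a caught-up sync mirror to be reported as in-sync):

> snapmirror status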


Documents I found useful:

WP-7107: Optimizing Storage Performance and Cost with Intelligent Caching

TR-3326: 7-Mode SnapMirror Sync and SnapMirror Semi-Sync Overview and Design Considerations

TR-3548: Best Practices for MetroCluster Design and Implementation

United States Patent 7730153: Efficient use of NVRAM during takeover in a node cluster


NetApp Reallocate

May 24, 2013

The smallest addressable block of data in Data ONTAP is 4k. However, all data is written to volumes in 256k chunks. When a data block bigger than 256k comes in, the filer searches for a contiguous 256k of free space in the file system. If it’s found, the data block is written into it; if not, the filer splits the data block and puts the pieces in several places. This is called fragmentation and is familiar to everyone from the days when FAT file systems were in use. It’s not a big issue in modern file systems like NTFS or WAFL, but defragmentation can help to solve performance problems in some situations.
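
As a simple illustration (my own example): a 1MB sequential write needs four 256k chunks. On a volume with plenty of contiguous free space those chunks land next to each other, so a later sequential read of that megabyte becomes one long disk pass; on a fragmented volume the same four chunks may end up scattered, turning that read into four separate seeks.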

In mostly random read/write environments (which are quite common these days) fragmentation has no impact on performance: if you write or read data at random places on the hard drive, it doesn’t matter whether the data is laid out randomly or sequentially on the physical media. NetApp recommends considering defragmentation for applications with sequential read workloads:

  • Online transaction processing databases that perform large table scans
  • E-mail systems that use database storage with verification processes
  • Host-side backup of LUNs

The reallocation process uses threshold values to represent the file system layout optimization level, where 4 is normal and anything above 10 is not optimal.

To check the level of optimization for particular volume use:

> reallocate measure -o /vol/vol_name

If you decide to run reallocate on the volume, run:

> reallocate start -f /vol/vol_name

There are certain considerations if you’re using snapshots or deduplication on volumes. There is a “-p” option to prevent reallocation from inflating snapshots, and from version 8.1 Data ONTAP also supports reallocation of deduplicated volumes. Consult the official documentation for additional information.
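
For example, to reallocate a volume that has snapshots without inflating them, add the “-p” flag to the forced run from above (assuming 7-Mode syntax):

> reallocate start -f -p /vol/vol_name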

Further reading:

TR-3929: Reallocate Best Practices