
NetApp thin-provisioning for VMware LUNs

May 22, 2013


LUN and Volume Thin Provisioning

I already described thin provisioning of VMware NFS volumes some time ago here. Now I want to discuss thin provisioning of LUNs.

LUNs are different from the NFS datastore implementation, because a LUN is an additional container inside a NetApp FlexVol. So if you’re using FC, you need to thin provision both the LUN and the volume:

> lun set reservation "/vol/targetvol/targetlun" disable
> vol options "targetvol" guarantee none
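
If you want to double-check that both settings took effect, you can review the LUN and the volume afterwards. This is just a sanity check, using the same example names as above:

> lun show -v "/vol/targetvol/targetlun"
> vol options "targetvol"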

In fact, you can make the LUN thin and the volume thick. Then storage space that is not used by the LUN is returned to the volume level, but in this case it cannot be shared with other volumes as a common pool of space.

As a best practice, NetApp now recommends setting Fractional Reserve and Snap Reserve to 0% on your volumes. Don’t forget about that if you want to save more storage space:

> vol options "targetvol" fractional_reserve 0
> snap reserve "targetvol" 0

Disable scheduled snapshots if you don’t use them:

> snap sched "targetvol" 0

It’s as easy as that. Now you don’t waste space by reserving it up front, but use it as a shared pool of resources. Just make sure to monitor aggregate free space: if you are starting to run out of storage, plan the purchase of new disks in advance or redistribute data between aggregates.
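
For example, you can keep an eye on free space right from the console (targetvol is just the example volume name from above):

> df -A
> df -h targetvol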

Safety Features

Disabling volume-level, LUN and snapshot reservations helps you save storage space. The drawback of this approach is that you don’t have any mechanism in place to prevent volume out-of-space situations. If you enable snapshots on the volume and they consume all the volume space, the volume goes offline, which is a very undesirable consequence. NetApp has two features that can serve as a safety net in thin-provisioned environments: autosize and snap autodelete.

Snap autodelete automatically removes old snapshots if there is no space left inside the volume. Autosize, on the other hand, allows the volume to grow automatically to a specified limit (+20% of the volume size by default) in specified increments (5% of the volume size by default). You can also specify which happens first, autosize or snapshot autodelete, by using the 'try_first' option.

> snap autodelete "targetvol" on
> vol autosize "targetvol" on
> vol options "targetvol" try_first volume_grow
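
If you don’t want to rely on the +20%/5% defaults, you can set the limit and the increment explicitly and then review the current autosize settings. The sizes below are only an example:

> vol autosize "targetvol" -m 600g -i 25g on
> vol autosize "targetvol"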

SnapMirror Considerations

If you use SnapMirror and enable autosize on the source volume, the destination volume won’t grow automatically, and SnapMirror will break the relationship if it runs out of space on the smaller destination volume. The trick here is to make the destination volume as big as the autosize limit of the source volume and thin provision it. That way you won’t run out of space on the destination even if the source volume grows to its maximum.
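
As a rough sketch, assuming the source volume has a 600g autosize limit and the destination volume on the partner filer is called targetvol_dst (all names and sizes here are hypothetical), you would size and thin provision the destination before the initial transfer and then keep an eye on the relationship:

> vol size "targetvol_dst" 600g
> vol options "targetvol_dst" guarantee none
> snapmirror status "targetvol_dst"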

Further reading

TR-3965: NetApp Thin Provisioning Deployment and Implementation Guide Data ONTAP 8.1 7-Mode


NetApp NDMP with Symantec BackupExec

March 16, 2012

Some time ago I uploaded a bunch of photos from the data center, where you can see our backup setup. We connect a Sun StorageTek SL500 tape library directly to a NetApp filer to back up the virtual infrastructure using the NDMP protocol. As opposed to LAN backup, NDMP allows you to offload backup traffic from the LAN. Look at the following picture:

Here BackupExec only sends NDMP control commands to the NDMP host, which in its turn sends data to the directly attached tape library. We use a slightly more complicated 3-way backup architecture:

We have two filers in a high availability cluster, and each of them has its own disk shelves and data. The filer under number 3 in the picture is the primary source of backup data, and data from filer 2 is backed up occasionally. Since filer 2 has no connection to the library, when a backup is initiated the data is sent via LAN from filer 2 to filer 3 and then to the tape library.

NetApp configuration

NDMP configuration involves several steps. First of all, enable ndmpd on the NetApp and set version 4, which Symantec BackupExec works with:

ndmpd on
ndmpd version 4
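
You can confirm that the daemon is running and which version it is set to:

ndmpd status
ndmpd version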

Then it’s generally a good idea to restrict NDMP access to particular hosts and a particular interface, because by default access is allowed from anywhere. In our setup NDMP traffic goes through a completely isolated management network. We added two IP addresses to the allowed hosts, the first being the backup server and the second the partner filer:

options ndmpd.access hosts=ip_1,ip_2
options ndmpd.access if=manage_if

Then I’d recommend creating a separate user for NDMP backups, changing its group to Backup Operators and generating a special NDMP password which you will use to connect from BackupExec:

useradmin useradd backup
useradmin user modify backup -g "Backup Operators"
ndmpd password backup
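
To make sure the user was created and ended up in the right group, you can list it afterwards:

useradmin user list backup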

As a last recommendation, I suggest changing the preferred network interface for data connections. By default, the filer sends data traffic over the same network interface on which it receives control commands. But if you have a separate network for filer-to-filer communication, it’s preferable to use it. In our configuration it’s the same management interface, so for us it doesn’t make any difference:

options ndmpd.preferred_interface manage_if

Additionally you can use the following command to list your tape library robots:

storage show mc

Do the same configuration for all filers, if you have more than one.

BackupExec configuration

For NDMP to work in BackupExec you need to obtain a license key and install the NDMP Option module. Then go to the Devices section and click Add NDMP Server. In the Add NDMP Server dialog box, specify the server name and logon account. If you have more than one filer, do this for each one.

That’s it. Now you have the filer volumes in backup selection lists and the tapes in the Media section, and you are ready to run backups.

NetApp thin provisioning for VMware

March 15, 2012

Thin provisioning is a popular buzzword, especially when it comes to NetApp. However, it can really save you time and headache in a number of situations. We use thin provisioning when presenting NFS volumes from NetApp to VI3 ESX hosts. NetApp already lets you resize its FlexVol volumes on the fly, but you need to do it manually. Thin provisioning lets you configure volumes so that in case of a space shortage a volume expands automatically without manual intervention. Of course you need to look after your volumes, otherwise they can fill all your storage space, but it buys you enough time to deal with the data growth. Without thin provisioning, in such a situation your applications can easily crash.

NetApp doesn’t support iSCSI thin provisioning for VMware, so NFS is the only option. Don’t be afraid of performance issues. Without a doubt it’s slower than FC, but NetApp is famous for its NFS performance and it’s very well suited for mid-level workloads.

To be more specific, using thin provisioning you can create, say, a 300GB virtual hard drive for a particular VM and it will initially use no space; then it grows as you fill it. This can save you a tremendous amount of storage space, because you never know exactly in advance how much space you will need. But be aware that if you migrate a thin provisioned virtual hard drive using the storage migration plugin for VMware VirtualCenter, it will inflate to its full size: a 300GB disk will consume all 300GB even if it’s half-full.
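
If you want to see the difference for yourself, you can compare the apparent and the actual size of a thin disk from the ESX service console; the datastore and VM names below are made up:

# ls -lh /vmfs/volumes/nfs_datastore1/testvm/testvm-flat.vmdk
# du -sh /vmfs/volumes/nfs_datastore1/testvm/testvm-flat.vmdk

ls shows the full provisioned size, while du shows how much space the file actually consumes on the NFS datastore.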

The best article to help you integrate NetApp with VMware VI3 is NetApp TR-3428: NetApp and VMware Virtual Infrastructure 3 Storage Best Practices. What I write here is basically a set of excerpts from this article.

NetApp Configuration

Let’s start with the NetApp configuration. The first thing to do is to disable scheduled snapshots, as usual. Generally it’s not a good idea to take snapshots of VMware virtual hard drives on the fly, because they won’t be consistent. I will touch on this topic in later posts.

> snap sched <vol-name> 0 0 0
> snap reserve <vol-name> 0

The next step is to disable access time updates on the volume, which is safe because VMware doesn’t rely on accurate access times for its files. This will improve performance, since the filer won’t need to update the access time of files each time they are read.

> vol options <vol-name> no_atime_update on

Then configure the thin provisioning feature itself by switching the volume autosize policy on. It has two keys, -m and -i: with -m you set the maximum volume size and with -i you configure the increment size.

> vol autosize <vol-name> [-m <size>[k|m|g|t]] [-i <size>[k|m|g|t]] on

NetApp recommends disabling Fractional Reserve for thin provisioned volumes; it’s just not needed anymore. Fractional Reserve guarantees successful writes to volumes when you use snapshots. Because of how snapshots work, if you completely overwrite the data captured in a snapshot you use double the amount of storage space, and that’s where Fractional Reserve comes into play: it reserves 100% of additional space for such cases, which means you never run into a situation where you are out of space due to active snapshots. But since we enabled autosize, our volume will grow on demand and Fractional Reserve becomes redundant. Presumably autosize was implemented a little later than Fractional Reserve, which is why NetApp has both of them.

> vol options <vol-name> fractional_reserve 0

In case you use snapshots as a tool for instant VMware block-level backups, you can change the autodelete policy. I said earlier that you should disable the snapshot schedule; however, you can still create consistent snapshots manually (using scripts). If you want to do that, you can additionally instruct NetApp to delete the oldest snapshots when the volume is out of space and can’t auto grow.

> snap autodelete <vol-name> commitment try trigger volume target_free_space 5 delete_order oldest_first
> vol options <vol-name> try_first volume_grow

Now we need to create an NFS export on the NetApp filer. This is where the FilerView interface comes in handy. In short, you should give your ESX hosts read-write and root access and configure the Unix security style.
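
If you prefer the console over FilerView, the same thing can be done with exportfs and qtree security; the host and volume names here are just examples:

> exportfs -p rw=esxhost1:esxhost2,root=esxhost1:esxhost2 /vol/nfs_vol1
> qtree security /vol/nfs_vol1 unix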

VMware Configuration

The VMware configuration is trivial. Go to the VMware Add Storage wizard, select Network File System, point to your NetApp filer and specify the volume path. Additionally, NetApp recommends tuning the NFS heartbeat parameters. Go to Host Configuration – Advanced Settings – NFS and for ESX 3.0 hosts change:

NFS.HeartbeatFrequency to 5 from 9
NFS.HeartbeatMaxFailures to 25 from 3

For ESX 3.5 hosts change:

NFS.HeartbeatFrequency to 12
NFS.HeartbeatMaxFailures to 10
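
These values can also be set from the ESX service console instead of the VI client, for example on an ESX 3.5 host (just a sketch of the same settings):

# esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
# esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures
# esxcfg-advcfg -g /NFS/HeartbeatFrequency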

There is much more information and there are more tuning parameters that you might want to read about. Find some time to look through TR-3428 if you need clarification or additional info.

VMware Update Manager failure

September 20, 2011

VMware Update Manager is the most annoying tool in VI3: frequent, uninformative "VMware Update Manager had a failure" errors, large unreadable logs, regular problems with the connection between the ESX host and Update Manager (especially if the server hosting VirtualCenter has several NICs), and sporadic bugs in Update Manager that are usually solved by a service restart.

Recently I got another "VMware Update Manager had a failure" error with the following description in C:\Documents and Settings\All Users\Application Data\VMware\VMware Update Manager\Logs\vmware-vci-log4cpp.log:

[2011-09-13 15:09:15:895 'VcTaskMonitor' 5720 DEBUG] [vcTaskMonitor, 59] VcTaskMonitor destroyed for session[ACE9193D-026B-4E40-9436-548A2F7DD286]2EE1529C-2025-461F-8890-7C4A9DA02822
[2011-09-13 15:09:15:895 'InventoryMonitor' 5824 WARN] [InventoryMonitor, 632] Unexpected filter: session[ACE9193D-026B-4E40-9436-548A2F7DD286]A096575F-8EC0-45A1-BDF9-A1128CFA639B
[2011-09-13 15:09:15:957 'SingleHostScanTask.SingleHostScanTask{8}' 5720 ERROR] [vciTaskBase, 577] Task execution has failed: SingleHostScan : Platform Configuration Error: ERROR: Integrity Error!
Signature 0BFA1C860F0B0A6CF5CD5D2AEE7835B14789B619: keyExpired: 4789B619
ERROR: BundleID:None/Unknown
ERROR: File:/var/spool/esxupdate/contents.xml

This error is described in VMware KB article 1030001. It says:

To continue applying patches on ESX 3.5 hosts, the secure key needs to be updated before June 1, 2011.

It means that if you didn’t apply this patch, all updates will fail starting from June 1, 2011. All VMware updates are signed, and the old key has simply expired.

To solve this issue, download ESX350-201012410-BG and all its prerequisites (it was ESX350-201012404 for me), SCP them to the ESX host, unzip them and install using the --nosig option:

# esxupdate -b ESX350-201012410-BG --nosig update
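
After the installation you can check that the bundle shows up in the list of installed patches:

# esxupdate query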