
Magic behind NetApp VSC Backup/Restore

June 12, 2013

NetApp Virtual Storage Console is a plug-in for VMware vCenter which provides the capability to perform instant backups and restores using NetApp snapshots. It relies on several underlying NetApp features to accomplish its tasks, which I want to describe here.

Backup Process

When you configure a backup job in VSC, it simply creates a NetApp snapshot of the target volume on the NetApp filer. Interestingly, if you have two VMFS datastores inside one volume, both LUNs end up in the snapshot, since snapshots are taken at the volume level. But during a datastore restore, the second datastore is left intact. You would think that if VSC reverted the volume to the previously made snapshot, both datastores would be affected, but that’s not the case, because VSC uses Single File SnapRestore to restore the LUN (explained below). Creating several VMFS LUNs inside one volume is not a best practice, but it’s good to know that VSC handles this case correctly.

The same applies to VMs: there is no point in backing up a single VM in a datastore, because VSC makes a volume snapshot anyway. Back up the whole datastore in that case.
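For reference, the manual equivalent on the filer is just an ordinary volume snapshot, something like this (7-Mode CLI, the snapshot name is made up):

> snap create vol_name vsc_backup_1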

Datastore Restore

After a backup is done, you have three restore options. The first and least useful one is a datastore restore. The only use case for such a restore that I can think of is disaster recovery. But disaster recovery procedures are usually separate from backups and are based on replication to a disaster recovery site.

VSC uses NetApp’s Single File SnapRestore (SFSR) feature to restore a datastore. In a SAN implementation, SFSR reverts only the required LUN from the snapshot to its previous state instead of the whole volume. My guess is that SnapRestore uses LUN clone/split functionality in the background to create a new LUN from the snapshot, swap the old one with the new one and then delete the old one. But I haven’t found a clear answer to that question.
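To illustrate that guess, a manual clone-and-split of a LUN out of a snapshot would look roughly like this (the LUN names are made up, and this is not necessarily what SnapRestore actually does internally):

> lun clone create /vol/vol_name/vmfs_lun_name_tmp -b /vol/vol_name/vmfs_lun_name nightly.0
> lun clone split start /vol/vol_name/vmfs_lun_name_tmp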

For that functionality to work, you need a SnapRestore license. In fact, you can do the same trick manually by issuing a SnapRestore command:

> snap restore -t file -s nightly.0 /vol/vol_name/vmfs_lun_name

If you have only one LUN in the volume (which is the recommended layout), you can simply restore the whole volume with the same effect:

> snap restore -t vol -s nightly.0 /vol/vol_name

VM Restore

VM restore is also a somewhat controversial way of restoring data, because it completely removes the old VM. There is no way to keep the old .vmdks. You can restore particular virtual disks to another datastore, but the old .vmdks are not preserved even in that case.

VSC uses a different mechanism to perform a VM restore. It creates a LUN clone (not to be confused with FlexClone, which is a volume cloning feature) from a snapshot. A LUN clone doesn’t use any additional space on the filer, because its data is mapped to the blocks that sit inside the snapshot. VSC then maps the new LUN to the ESXi host that you specify in the restore job wizard. When the datastore is accessible to the ESXi host, VSC removes the old VMDKs and performs a Storage vMotion from the clone to the active datastore (or the one you specify in the job). The clone is then removed as part of the cleanup process.

The equivalent CLI command for the clone step is:

> lun clone create /vol/vol_name/clone_lun_name -o noreserve -b /vol/vol_name/vmfs_lun_name nightly.0
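After the clone is created, it still has to be presented to the ESXi host. Manually that would be a LUN mapping to the host’s igroup (the igroup name here is made up):

> lun map /vol/vol_name/clone_lun_name esxi_host_igroup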

Backup Mount

This is probably the most useful recovery option. VSC allows you to mount a backup to a particular ESXi host and do whatever you want with the .vmdks. After the mount you can connect a virtual disk to the same or another virtual machine and recover the data you need.

If you want to connect the disk to the original VM, make sure you change the disk UUID first, otherwise the VM won’t boot. Connect to the ESXi console and run:

# vmkfstools -J setuuid /vmfs/volumes/datastore/VM/vm.vmdk

Backup mount uses the same LUN cloning feature: the LUN is cloned from a snapshot and connected as a datastore. After an unmount the LUN clone is destroyed.
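The manual equivalent of the cleanup after an unmount would be roughly the following (paths and igroup name are made up):

> lun unmap /vol/vol_name/clone_lun_name esxi_host_igroup
> lun destroy /vol/vol_name/clone_lun_name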

Some Notes

VSC doesn’t clean up well after a restore. As part of mapping the LUN to the ESXi hosts, VSC creates new igroups on the NetApp filer, which it doesn’t delete after the restore is completed.
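If you want to tidy this up yourself, the leftover igroups can be listed and removed on the filer console (the igroup name below is just an example):

> igroup show
> igroup destroy leftover_igroup_name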

What’s more interesting, when you restore a VM, VSC deletes the .vmdks of the old VM but leaves all the other files (.vmx, .log, .nvram, etc.) in place. Instead of completely replacing the VM’s folder, it creates a new folder vmname_1 and copies everything into it. So if you use VSC regularly, you will accumulate these old folders.
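A quick way to spot such leftovers is to list the datastore from the ESXi shell (the datastore name is made up):

# ls -d /vmfs/volumes/datastore_name/*_1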


GFS backup scheme in Symantec Backup Exec

March 23, 2012

Grandfather-Father-Son is an industry-standard backup scheme, where you have 5 daily backups, 5 weekly backups and as many monthly backups as you need. Symantec Backup Exec has a prebuilt policy for GFS, but before configuring the backup scheme itself, let’s talk a little bit about general backup job configuration in Backup Exec.

Basic Terminology

Inside the user interface you see Jobs, Policies, Selection Lists and Media Sets. First of all, you need to create a Selection List, which describes what you want to back up; there you select files and folders from your Windows, Unix or NDMP servers. Then you create a Media Set, which is a collection of tapes with particular append and retention periods. The append period specifies for how long data can be added to the same tape, and the retention period tells for how long data cannot be overwritten. The retention period starts from the time of the last append to the tape. Then you create a Policy. A policy, by means of templates, defines when backup jobs run, where backups are stored and what the backup type is: incremental, differential or full. One policy can consist of several templates. In a template you specify the backup date and time, as well as the target tape library.

GFS Implementation

Backup Exec has a template for the GFS backup rotation scheme. Click “New policy using wizard”, choose the GFS scheme and then select the schedule, the target backup device and the media sets for daily, weekly and monthly backups. By default Backup Exec suggests the following configuration.

Three tape media sets:

  • Daily Media Set – 1 week overwrite, 1 week append
  • Weekly Media Set – 5 weeks overwrite, 5 weeks append
  • Monthly Media Set – 1 year overwrite, 1 year append

Policy with three templates:

  • Daily Backup – Monday to Friday, Incremental
  • Weekly Backup – every Friday, Full
  • Monthly Backup – first Saturday of each month, Full

Backup Exec also automatically creates rules to resolve conflicts. For example, when both the daily and the weekly backup are scheduled to run on Friday, the jobs do not conflict, because the weekly backup always supersedes the daily one. The same goes for the monthly backup.

I personally prefer a different schedule. First of all, if you run your jobs after midnight, you will need to shift the schedule from Mon – Fri to Tue – Sat. Additionally, I run the monthly backup on the first Saturday of the month. By default (taking my one-day shift into account), Backup Exec would suggest the first Sunday for the monthly backup. However, it doesn’t make much sense to have a weekly on Saturday and then a monthly the next day on Sunday; you would just consume more space without any benefit. You could also schedule the monthly on the last Saturday of the month, but if the last day of the month is, for example, a Thursday, you would lose four business days from your monthly backup.

After the policy is created, you need to create backup jobs based on it by clicking “New jobs using policy”. All three jobs will be created automatically, according to the Selection List and the policy’s Schedule, Target and Backup Type parameters.

I’d also recommend configuring notifications. They can be set in the general Alerts properties as well as inside each job.

Installing Symantec Backup Exec Agent for Linux

October 7, 2011

The Symantec Backup Exec Linux/Unix agent is called RALUS, which stands for Remote Agent for Linux and Unix Servers. I obtained my RALUS installation from the official Symantec CDs. If you don’t have them, you can probably download the package from the Symantec web site. Here is the sequence:

  1. Mount the CD or ISO image to your Linux host.
  2. Run the ./installralus script and follow the instructions. I used the defaults; the only thing you need to enter is the Media Server IP address. The installation script adds the agent to the rc*.d run levels automatically.
  3. After the installation is completed, create a backup user, add it to the beoper group and set its password: # useradd backup -c "User for Symantec Backup Exec"; # usermod -G beoper backup; # passwd backup
  4. Start the BE agent manually for the first time: # /etc/init.d/VRTSralus.init start

That’s it. Now you can see your server under the Linux/Unix Servers section when creating a backup job.
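To double-check that the agent is up and listening (assuming the default port 10000), run on the Linux host:

# netstat -tlnp | grep 10000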

Add #1: If the agent doesn’t start and you see an error about a missing libstdc++.so.5 in /var/VRTSralus/beremote.service.log, install the compat-libstdc++-33 package.

Add #2: If you have an active firewall, you need to open additional ports. For me it was TCP 10000-10200: port 10000 plus the port range you can find on the media server under Tools -> Options -> Network and Security. For CentOS the firewall rule would be:

-A RH-Firewall-1-INPUT -m tcp -p tcp -s media_server_ip --dport 10000:10200 -j ACCEPT

Add #3: If you also write firewall rules for the OUTPUT chain, then open outbound TCP 10000 as well:

-A RH-Firewall-1-OUTPUT -m tcp -p tcp -d media_server_ip --dport 10000 -j ACCEPT

If you don’t have an RH-Firewall-1-OUTPUT chain, also add:

:RH-Firewall-1-OUTPUT - [0:0]
-A OUTPUT -j RH-Firewall-1-OUTPUT

I may be wrong here, but the Backup Exec documentation says:

Symantec recommends having port 10000 open and available on the Backup Exec media
server as well as on the remote systems.

Additional connections from the media server to the Remote Agent will be initiated on any available port.

I read that as: both the agent and the media server may connect to each other’s port 10000, and the additional 10001:10200 connections are initiated from the media server.