Posts Tagged ‘schedule’

Scripted CIFS Shares Migration

March 8, 2018

I don’t usually blog about Windows Server and Microsoft products in general, but the need for file server migration comes up in my work quite frequently, so I thought I’d make a quick post on that topic.

There are many use cases: it can be a migration from a NAS storage array to a Windows Server, or between an on-premises file server and the cloud. Every such migration involves copying data and recreating shares. Doing it manually is almost impossible unless you have only a handful of shares. If you want to replicate all NTFS and share-level permissions consistently from source to destination, scripting is practically the only way to go.

Copying data

I'm sure there are plenty of tools that can perform this task accurately and efficiently. But if you don't have any special requirements, such as data-in-transit encryption, Robocopy is probably the simplest tool to use. It comes with every Windows Server installation and, starting from Windows Server 2008 R2, supports multithreading.

Below are the command line options I use:

robocopy \\file_server\source_folder D:\destination_folder /E /ZB /DCOPY:T /COPYALL /R:1 /W:1 /V /TEE /MT:128 /XD "System Volume Information" /LOG:D:\robocopy.log

Most of them are common, but there are a few worth pointing out:

  • /MT – use multithreading; 8 threads per Robocopy process by default. If you're dealing with lots of small files, this can significantly improve performance.
  • /R:1 and /W:1 – Robocopy doesn't copy locked files, to avoid data inconsistencies. The default behaviour is to keep retrying until the file is unlocked. That's important for the final data synchronisation, but for data seeding I recommend one retry and a one-second wait to avoid unnecessary delays.
  • /COPYALL and /DCOPY:T will copy all file and directory attributes and permissions, as well as timestamps.
  • /XD "System Volume Information" is useful if you're copying an entire volume. If you don't exclude the System Volume Information folder, you may end up copying deduplication and DFSR data, which, in addition to wasting disk space, will break these features on the destination server.

Robocopy is typically scheduled to run at certain times of the day, preferably after hours. You can put it in a batch script and schedule it using Windows Task Scheduler. Just keep in mind that if you configure the job to stop after running for a certain number of hours, Task Scheduler will stop only the batch script, while the Robocopy process will keep running. As a workaround, you can schedule another job with the following command to kill all Robocopy processes at a certain time of the day, say 6am:

taskkill /f /im robocopy.exe
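
If you prefer to create both jobs from the command line, a minimal sketch with schtasks looks like this; the task names, script path and times are placeholders for illustration:

schtasks /create /tn "Robocopy Seed" /tr "D:\scripts\robocopy_seed.cmd" /sc daily /st 22:00 /ru SYSTEM
schtasks /create /tn "Robocopy Kill" /tr "taskkill /f /im robocopy.exe" /sc daily /st 06:00 /ru SYSTEM

The first task starts the seeding script every evening, the second kills any Robocopy processes still running at 6am.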

Duplicating shares

For copying CIFS shares I've been using the "sharedup" utility from EMC's "CIFS Tools" collection. To get the tool, register a free account on https://support.emc.com. You can do that even if you're not an EMC customer and don't own an EMC storage array. From there you will be able to search for and download CIFS Tools.

If your source and destination file servers are completely identical, you can use sharedup to duplicate CIFS shares in one command. But that's rarely the case. Often you want to exclude some of the shares or change paths because your disk drives or directory structure have changed. Sharedup supports input and output file command line options: you can generate a list of shares first, edit it, and then import the shares to the destination file server.

To generate the list of shares first run:

sharedup \\source_server \\destination_server ALL /SD /LU /FO:D:\shares.txt /LOG:D:\sharedup.log

Resulting file will have records similar to this:

#
@Drive:E
:Projects ;Projects ;C:\Projects;
#
@Drive:F
:Home;Home;C:\Home;

Delete the shares you don't want to migrate and update the target paths from C:\ to where your data actually is. Don't change the "@Drive:E" headers; they specify the location of the source share, not the destination. Also worth noting is that you won't see permissions listed anywhere in this file. It lists shares and share paths only; permissions are checked and copied at runtime.
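
For example, if the Projects data now lives on D:\Projects on the destination server, the edited entry would look something like this (the drive letter is just an illustration):

#
@Drive:E
:Projects ;Projects ;D:\Projects;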

Once you’re happy with the list, use the following command to import shares to the destination file server:

sharedup \\source_server \\destination_server ALL /R /SD /LU /FI:D:\shares.txt /LOG:D:\sharedup.log

For server-local users and groups, sharedup will check whether they exist on the destination. So if you run into an error similar to the following, make sure to first create those groups on the destination file server:

10:13:07 : WARNING : The local groups “WinRMRemoteWMIUsers__” and “source_server_WinRMRemoteWMIUsers__” do not exist on the \\destination_server server !
10:13:09 : WARNING : Please use lgdup utility to duplicate the missing local user(s) or group(s) from \\source_server to \\destination_server.
10:13:09 : WARNING : Unable to initialize the Security Descriptor translator

Conclusion

I created this post as a personal howto note, but I’d love to hear if it’s helped anyone else. Or if you have better tool suggestions to accomplish this task, please let me know!


AWS Cloud Protection Manager Part 3: Backup and Restore

August 21, 2017

Backup

Backups are created according to the schedule specified in the backup policy. We discussed how to configure backup policies in the previous post of the series. The list of backups you see on the Backup Monitor tab are your restore points. Backups that are older than the specified retention policy are purged from the list and you will not see them there, unless you move them to the "Freezer".

It is important to understand that, apart from volume snapshots, CPM also creates an AMI for each backed-up instance. Those who have hands-on experience with AWS may already know that an AMI is the only way to create a clone of a Windows EC2 instance in AWS. If you go to the AWS console and look for a clone action under the instance Actions menu, you won't find one. You will see "Create Image" instead. It creates an AMI, from which you can then spin up a clone of the instance the image was created from.

CPM does exactly that. For each backup policy the instance is under, it creates one AMI. In our example we have four backup policies, which will result in four AMIs for each of the instances. Every AMI has to have at least one storage volume, so CPM will include the root volume of each instance in the AMI, simply because it has to. But the AMIs are required only to restore the EC2 instance configuration. Data is restored from volume snapshots, which can be used to create new volumes and attach them to the instance. You can click the View button under Snapshots to find the corresponding snapshot and AMI IDs.
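
For context, here is a rough sketch of what those building blocks look like if you were to do them by hand with the AWS CLI; the instance, snapshot and volume IDs, availability zone and device name are placeholders, and CPM automates all of this for you:

# capture the instance configuration as an AMI
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "manual-clone" --no-reboot
# create a new volume from a backup snapshot and attach it to an instance
aws ec2 create-volume --availability-zone ap-southeast-2a --snapshot-id snap-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf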

There is also a backup log for each job run, which is helpful for troubleshooting.

Restore

To perform a restore, click the Recover button next to the backup job and you will get the list of instances you can recover. CPM offers three options: instance recovery, volume recovery and file recovery. Let's go through them back to front.

File recovery is probably the most used recovery option, as it lets you restore individual files. When you click the "Explore" button, CPM creates new volumes from the snapshots you are restoring from and mounts them to the CPM instance. You are then presented with a simple file system browser, where you can find the file and click the green down-arrow icon in the Download column to save it to your computer.

If you click "Volume Only", you can restore particular volumes. Restored volumes are not attached to any instance unless you specify one in the "Attach to Instance" column. You can then select under "Attach Behaviour" what CPM should do if such a volume is already attached to the instance, or if you want to automatically detach the original volume while the instance is running (detaching is only possible when the instance is stopped).

And the last option is "Instance". It will create a clone of the original instance using the pre-generated AMI and the volume snapshots, as we discussed in the Backup section of this post. You can specify many settings under the Advanced Options section, including recovery to another VPC or a different availability zone. At a minimum, make sure you specify a new IP address for the instance, otherwise you'll have a conflict and your restore will fail. Ideally you should also shut down the original EC2 instance before spinning up a restored clone.

Advanced Features

There are quite a few features worth mentioning. So far we have looked at a simple EC2 instance restore, but you don't have to back up whole instances; you can also back up individual volumes. On top of that, CPM supports RDS database, Aurora and Redshift cluster backups.

If you run MS Exchange, SharePoint or SQL Server on your EC2 instances, you can install the CPM backup agent on them to get application-consistent backups via VSS, as opposed to the crash-consistent backups you get when the agent is not used. If you install the agent, you can also run a script on the instance before and after the backup is taken.

Last but not least is DR. Restoring to another availability zone within the region is already supported at the instance recovery level: you can choose the availability zone you want to restore to. It is not possible to recover to another region directly, though, because AWS snapshots and AMIs are local to the region they are created in. If you want to be able to recover to another region, you can configure DR in CPM, which will use the AWS AMI and snapshot copy functionality to copy backups to another region at a configured frequency.
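
Under the hood this maps to the standard AWS copy APIs. A hand-run equivalent with the AWS CLI would look roughly like this; the regions and IDs are placeholders, and both commands are executed against the destination region:

aws ec2 copy-snapshot --region ap-southeast-1 --source-region ap-southeast-2 --source-snapshot-id snap-0123456789abcdef0 --description "DR copy"
aws ec2 copy-image --region ap-southeast-1 --source-region ap-southeast-2 --source-image-id ami-0123456789abcdef0 --name "dr-copy"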

Conclusion

Overall, I found Cloud Protection Manager very easy to install, configure and use. If you come from an infrastructure background, at first glance CPM may look like a very basic tool compared to feature-rich solutions such as Veeam or Commvault. But that impression is misleading. CPM is simple because AWS is simple: all the infrastructure complexity is hidden under the covers. As a result, all an AWS backup tool needs to do is create snapshots, and CPM does that well.

AWS Cloud Protection Manager Part 2: Configuration

August 14, 2017

Overview

As we discussed in Part 1 of this series, snapshots serve as a good basis for implementing backup in AWS. But AWS does not provide an out-of-the-box tool that can manage snapshots at scale and perform snapshot creation/deletion based on a defined retention. The rich AWS APIs allow you to build such a tool yourself, or you can use an existing backup solution built for AWS. In this blog post we are looking at one such product, called Cloud Protection Manager.

You may be surprised to learn that the first version of Cloud Protection Manager was released back in 2013. The product has matured over the years and, according to the N2W website, the current CPM version 2.1 has become quite popular amongst AWS customers.

CPM is offered in four different versions: Standard, Advanced, Enterprise and Enterprise Plus. Functionality across all four versions is mostly the same, with the key difference being the number of instances you can back up, ranging from a 20-instance limit in Standard to $5-per-instance licensing in Enterprise Plus.

Installation

CPM offers a very straightforward consumption model. You purchase it from AWS marketplace and pay by the month. Licensing costs are billed directly to your account. There are no additional steps involved.

To install CPM you need to find the version you want to purchase in the AWS Marketplace, specify instance settings such as region, VPC subnet and security group, then accept the terms and click Launch. AWS will spin up a new CPM server as an EC2 instance for you. You also have the option of running a 30-day trial if you want to play with the product before making a purchasing decision.

Note that CPM needs to be able to talk to the AWS API endpoints to perform snapshots, so make sure the appliance has Internet access by means of a public IP address, an Elastic IP address or a NAT gateway. Similarly, the security group you attach it to should allow at least HTTPS out.
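
If your security group restricts outbound traffic, a rule along these lines (the group ID is a placeholder) is the minimum CPM needs:

aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0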

Initial Configuration

The appliance is then configured using an initial setup wizard. Find out what private IP address has been assigned to the instance and open a browser session to it. The wizard is reasonably straightforward, but there are two things I want to draw your attention to.

You will be asked to create a data volume. This volume is needed purely to keep CPM configuration and metadata; backups are kept in S3 and do not use this volume. The default size is 5 GB, which is enough for roughly 50 instances. If you have a bigger environment, allocate 1 GB for every 10 AWS instances.

You will also need to specify AWS credentials for CPM to be able to talk to the AWS APIs. You could use your AWS account access keys, but that is not a security best practice. In AWS you can assign a role to an EC2 instance, which is what you should be using for CPM. You will need to create IAM policies that describe the permissions CPM needs to create backups, perform restores, send notifications via AWS SNS and configure EC2 instances. Just refer to the CPM documentation, copy and paste the configuration for all policies, create a role and specify that role in the setup wizard.
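
As a rough sketch, the role plumbing looks like this with the AWS CLI; the role and profile names and the instance ID are arbitrary, and the two JSON files are the EC2 trust policy and the permissions policy taken from the CPM documentation:

aws iam create-role --role-name cpm-backup --assume-role-policy-document file://ec2-trust-policy.json
aws iam put-role-policy --role-name cpm-backup --policy-name cpm-backup-policy --policy-document file://cpm-policy.json
aws iam create-instance-profile --instance-profile-name cpm-backup
aws iam add-role-to-instance-profile --instance-profile-name cpm-backup --role-name cpm-backup
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=cpm-backup

You can also simply attach the role on the Marketplace launch page or in the console; the CLI is shown here just to make the moving parts explicit.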

Backup Policies

Once you are finished with the initial wizard, you will be able to log in to the appliance using the password you specified during installation. As in most backup solutions, you start with backup policies, which allow you to specify backup targets, schedule and retention.

One thing that I want to touch on here is backup schedules, which may be a bit confusing at first. It's easier to explain with an example. Say you want to implement a commonly used GFS backup scheme with 7 daily, 4 weekly, 12 monthly and 7 yearly backups. The daily backup should run every day at 8pm, starting from today, and weekly backups run on Sundays.

This is how you would configure such schedule in CPM:

  • Daily
    • Repeats Every: 1 Days
    • Start Time: Today Date, 20:00
    • Enabled on: Mon-Sat
  • Weekly
    • Repeats Every: 1 Weeks
    • Start Time: Next Sunday, 20:00
    • Enabled on: Mon-Sun
  • Monthly
    • Repeats Every: 1 Months
    • Start Time: 28th of this month, 21:00
    • Enabled on: Mon-Sun
  • Yearly:
    • Repeats Every: 12 Months
    • Start Time: 31st of December, 22:00
    • Enabled on: Mon-Sun

Some of the gotchas here:

  • The "Enabled on" setting is relevant only to the Daily backup; the other schedules are based on the date you specify in the "Start Time" field. For instance, if the date you specify in the Weekly backup Start Time is a Sunday, your weekly backups will run every Sunday.
  • Make sure to run your Monthly backup on the 28th day of every month, to guarantee you have a backup every month, including February.
  • It's not possible to prevent the Weekly backup from running in the last week of every month. So make sure to adjust the Start Time of the Monthly backup so that the Weekly and Monthly backups don't run at the same time if they happen to fall on the same day.
  • The same considerations are true for the Yearly backup as well.

Then you create your daily, weekly, monthly and yearly backup policies using the corresponding schedules and add the EC2 instances that require protection to every policy. Retention is also specified at the policy level. For our scenario we will have 6 generations for Daily (the Sunday restore point is provided by the Weekly backup), 4 generations for Weekly, 12 generations for Monthly and 7 generations for Yearly.

Notifications

CPM uses the AWS Simple Notification Service (SNS) to send email alerts. If you gave the CPM instance SNS permissions in the IAM role you created previously, you can simply go to the Notification settings, enable Alerts and select the "Create new topic" and "Add user email as recipient" options. CPM will automatically create an SNS topic in AWS and use the email address you specified in the setup wizard to send notifications to. You can change or add more email addresses to the SNS topic in the AWS console later if you need to.
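
If you prefer to create the topic and subscription yourself rather than letting CPM do it, the equivalent AWS CLI calls are along these lines (topic name, account ID and email address are placeholders):

aws sns create-topic --name cpm-alerts
aws sns subscribe --topic-arn arn:aws:sns:ap-southeast-2:123456789012:cpm-alerts --protocol email --notification-endpoint admin@example.com

The recipient then has to confirm the subscription from the email SNS sends out before alerts start arriving.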

Conclusion

This is all you need to get your Cloud Protection Manager up and running. In the next blog post we will look at how instances are backed up and restored and discuss some of the advanced backup options CPM offers.

NetApp SnapMirror Optimization

May 31, 2013

SnapMirroring to a disaster recovery site requires a huge amount of data to be transferred over the WAN link. In some cases replication can lag significantly behind the defined schedule. There are two ways to reduce the amount of traffic and speed up replication: deduplication and compression.

If you apply deduplication to the replicated volumes, you simply reduce the amount of data that needs to be transferred. You can read how to enable deduplication in my previous post.

Compression is a lesser-known feature of SnapMirror. It compresses the data being transferred on the source and decompresses it on the destination; the data inside the volume is left intact.

To enable SnapMirror compression you first need to make sure that all the connections in your snapmirror.conf file have names, like:

connection_name=multi(src_system,dst_system)

Then use the 'compression=enable' configuration option to enable it for a particular SnapMirror relationship:

connection_name:src_vol dst_system:dst_vol compression=enable 0 2 * *
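
Putting the two lines together, a complete snapmirror.conf entry for the fas1 to fas2 relationship shown in the status output below would look roughly like this (the connection name and volume names are illustrative):

fas1_fas2=multi(fas1,fas2)
fas1_fas2:src fas2:dest compression=enable 0 2 * *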

To check the compression ratio after a transfer has finished, run:

> snapmirror status -l

And look at ‘Compression Ratio’ line:

Source: fas1:src
Destination: fas2:dest
Status: Transferring
Progress: 24 KB
Compression Ratio: 3.5 : 1

The one drawback of compression is increased CPU load. Monitor your CPU utilisation and, if it's too high, use compression selectively.

Unexpected Deduplication Impact on VMware I/O Latency

May 28, 2013

NetApp deduplication is a post-process operation. During normal operation Data ONTAP only calculates hashes for the data blocks; the actual deduplication is carried out off-hours as per the configured schedule. Hash calculation doesn't affect performance in most cases. I talked about that in my previous post. NetApp states in its documentation that deduplication is a low-priority process:

When one deduplication process is running, there is 0% to 15% performance degradation on other applications.

I once faced a situation where deduplication was configured to run during business hours on one of the volumes. No one noticed that at some point the volume ran out of space and Data ONTAP was unable to perform deduplication from then on. The situation became worse when Data ONTAP was upgraded from version 7.3.2 to 8.1.0. During deduplication the filer now tried to upgrade the fingerprint metadata to the new version at 15:00 every day, logging the message "Fingerprint is being upgraded", and failed. It turns out the metadata upgrade is a very resource-intensive process and heavily affects I/O latency.

This volume was not a VMware datastore, but it sat on the same aggregate as several VMFS LUNs. Here is what happened to the VMware I/O latency every day at 15:00:

[Graph: VMware datastore I/O latency spiking daily at 15:00]

I removed the host name and the datastore names from the graph. You can see the large latency spike; it won't send your VMs into kernel panic, but it's not something you want your production environment to experience every day.

The solution was simple. After space was increased on the volume, the deduplication metadata upgrade completed successfully and the problem went away. Additionally, deduplication was shifted to off-hours.

The simple lesson to learn: don't schedule deduplication during the day; you never know what could possibly go wrong.
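
On 7-Mode you can check when deduplication is scheduled and move it to off-hours with the sis commands; the volume name below is just an example:

> sis status -l /vol/datavol
> sis config -s sun-sat@23 /vol/datavol

The second command restricts the deduplication scan to start at 23:00 every day.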

NetApp thin-provisioning for VMware LUNs

May 22, 2013


LUN and Volume Thin Provisioning

I already described thin provisioning of VMware NFS volumes some time ago here. Now I want to discuss thin provisioning of LUNs.

LUNs are different from the VMFS-on-NFS implementation, because a LUN is an additional container inside a NetApp FlexVol. So if you're using FC, you need to thin provision both the LUN and the volume:

> lun set reservation “/vol/targetvol/targetlun” disable
> vol options “targetvol” guarantee none

In fact, you can make the LUN thin and the volume thick. Then the storage space that's not used by the LUN is returned to the volume level, but in that case it cannot be used by other volumes as a shared pool of space.

As a best practice, NetApp now recommends setting Fractional Reserve and Snap Reserve to 0% on your volumes. Don't forget about that if you want to save more storage space:

> vol options “targetvol” fractional_reserve 0
> snap reserve “targetvol” 0

Disable snapshots if you don’t use them:

> snap sched “targetvol” 0

It's as easy as that. Now you don't waste space by reserving it ahead of time, but use it as a shared pool of resources. Just make sure to monitor aggregate free space. If you're starting to run out of storage, plan the purchase of new disks in advance or redistribute data across other aggregates.
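
A couple of commands make this easy to keep an eye on (the aggregate name is an example):

> df -Ag
> aggr show_space -g aggr1

The first shows used and available space for every aggregate in gigabytes, the second breaks down how the space inside a particular aggregate is allocated between its volumes.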

Safety Features

Disabling volume-level, LUN and snapshot reservations helps you save storage space. The drawback of this approach is that you don't have any mechanism in place to prevent volume out-of-space situations. If you enable snapshots on the volume and they consume all the volume space, the volume goes offline, which is a very undesirable consequence. NetApp has two features that can serve as a safety net in thin-provisioned environments: autosize and snap autodelete.

Snap autodelete automatically removes old snapshots if there is no space left inside the volume. Autosize, on the other hand, allows the volume to grow automatically to a specified limit (+20% of the volume size by default) in specified increments (5% of the volume size by default). You can also specify whether autosize or snapshot autodelete should be tried first using the 'try_first' option.

> snap autodelete “targetvol” on
> vol autosize “targetvol” on
> vol options “targetvol” try_first volume_grow
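
If the defaults don't suit you, autosize also accepts an explicit maximum size and increment; the values here are purely illustrative:

> vol autosize "targetvol" -m 1200g -i 50g on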

SnapMirror Considerations

If you use SnapMirror and switch on autosize on the source volume, the destination volume won't grow automatically, and SnapMirror will break the relationship if it runs out of space on the smaller destination volume. The trick here is to make the destination volume as big as the autosize limit of the source volume and thin provision the destination volume. That way you won't run out of space on the destination even if the source volume grows to its maximum.
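
In practice that means checking the autosize maximum on the source and sizing the thin-provisioned destination to match. A rough sketch, with hypothetical volume names and size, where the first command runs on the source system and the other two on the destination:

> vol autosize "srcvol"
> vol size "dstvol" 1200g
> vol options "dstvol" guarantee none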

Further reading

TR-3965: NetApp Thin Provisioning Deployment and Implementation Guide Data ONTAP 8.1 7-Mode

Reclaim NetApp snapshot space

March 5, 2012

Recently I needed to configure a big NFS share for HPC cluster users. The proposed use of this share is research data, which changes rapidly and becomes obsolete very quickly. The primary storage on site is NetApp, which by default reserves a large portion (20%) of each volume for snapshots. Snapshots are not practical for such a usage scenario, so here is a quick tip on how to reclaim the snapshot reserve for user data.

snap sched volname 0 0 0
snap delete -a volname
snap reserve volname 0

The first command disables scheduled snapshot creation, the second deletes all existing snapshots, and the third sets the volume's snapshot reserve to zero.
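
To double-check the result, you can confirm that the schedule is gone, no snapshots remain and the reserve is zero:

snap sched volname
snap list volname
snap reserve volname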

Why I wouldn’t recommend Microsoft Data Protection Manager as a backup solution

February 22, 2012

When we were evaluating the different backup solutions on the market, MS DPM 2010 looked rather attractive to us. It's a solution from a leading software vendor and it's cheap. However, DPM has a number of limitations which forced us to abandon it and rebuild our backup procedures from the ground up. Here I'd like to describe the major points as impartially as I can.

The first thing about DPM is that it uses VSS snapshots as its one and only backup method. The major consequence is that you have very little flexibility: you cannot do incremental, differential or full backups, or implement a GFS backup strategy or Progressive Paradigm. The only option you have is to exclude weekends or any other particular days from backups. That means inefficient storage utilization and the inability to achieve the longer data retention a flexible backup policy such as GFS would give you, not to mention additional spending on storage.

Another problem with VSS is that it supports only 64 snapshots. That means that if you exclude weekends from your backup policy (5 backups a week), 64 snapshots give you roughly an 89-day retention period. It's clearly not enough if, for example, you work in a financial institution with strict long-term data retention policies. DPM assumes that you will use tapes for prolonged data retention. If you already have tapes, you are good to go; if not, it's once again additional expenditure.

Interestingly enough, in DPM you cannot have different retention policies for data which resides on the same volume. Say I want to keep database backups for 3 months and transaction logs for the last week. If database backups and transaction logs reside on the same volume, you will have to create a second volume and separate them.

A limitation I personally find very inconvenient is space reservation. Each time you create a Protection Group you reserve space for it, say 500 GB, and you cannot change it. If one of the ten folders you back up from a server moves to another place and 250 GB becomes free, the only option you have is to destroy the Protection Group, lose all backups and recreate it. DPM does help when you don't know how your data will grow: you can specify a smaller storage size initially and it will automatically grow as needed. However, it can extend only 32 times. If you hit that limit, you are in the same situation as before.

Another major issue arises when you change a protected server's name. If the server name is changed, the only option you have is to destroy the Protection Group, lose all backups and recreate it.

The next limitation I also find inflexible is target storage. DPM can only use block devices as target storage for your backups, so it's either DAS, FC or iSCSI storage. NAS is not supported.

If you work in an SMB, you will probably also have issues with the installation and support of legacy systems. DPM works only on 64-bit Windows Server 2008 platforms (Windows Server 2003 is not supported), and it doesn't support Bare Metal Recovery (BMR) of Windows Server 2003.

And lastly, DPM keeps all data on a raw volume. Raw volumes are more efficient in terms of disk I/O performance, but while that helps a DBMS, it doesn't seem to make any difference for backup software. The downside is a higher risk of losing all data in case of volume damage or a DPM bug. It's arguable, so I will leave that as my personal opinion.

In conclusion, it's rather disappointing to see how software with a 7-year history (DPM 2006 was released in 2005) has more limitations than any other backup solution I can think of. Even if you don't have enough money for Symantec Backup Exec, ARCserve Backup, HP Data Protector or any other software, I would recommend making more of an effort and searching for some other solution. Otherwise you may fall into the same trap as we did.