Posts Tagged ‘configuration’

AWS Cloud Protection Manager Part 2: Configuration

August 14, 2017

Overview

As we discussed in Part 1 of this series, snapshots serve as a good basis for implementing backup in AWS. But AWS does not provide an out-of-the-box tool that can manage snapshots at scale and perform snapshot creation and deletion based on a defined retention. The rich AWS APIs allow you to build such a tool yourself, or you can use an existing backup solution built for AWS. In this blog post we are looking at one such product, called Cloud Protection Manager (CPM).

You may be surprised to learn that the first version of Cloud Protection Manager was released back in 2013. The product has matured over the years and, according to the N2W website, the current CPM version 2.1 has become quite popular amongst AWS customers.

CPM is offered in four different versions: Standard, Advanced, Enterprise and Enterprise Plus. Functionality across all four versions is mostly the same; the key difference is the number of instances you can back up, ranging from 20 instances in Standard to $5 per instance in Enterprise Plus.

Installation

CPM offers a very straightforward consumption model: you purchase it from the AWS Marketplace and pay by the month. Licensing costs are billed directly to your AWS account. There are no additional steps involved.

To install CPM, find the version you want to purchase in the AWS Marketplace, specify instance settings such as region, VPC subnet and security group, then accept the terms and click Launch. AWS will spin up a new CPM server as an EC2 instance for you. There is also an option to run a 30-day trial if you want to play with the product before making a purchasing decision.

Note that CPM needs to be able to talk to the AWS API endpoints to perform snapshots, so make sure the appliance has Internet access by means of a public IP address, an Elastic IP address or a NAT gateway. Similarly, the security group you attach it to should at least allow outbound HTTPS.
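
If the security group attached to the CPM instance is locked down, outbound HTTPS can be opened with a single rule. A minimal sketch using the AWS CLI (the security group ID is a placeholder):

# allow outbound HTTPS so CPM can reach the AWS API endpoints
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0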

Initial Configuration

The appliance is then configured using an initial setup wizard. Find out which private IP address has been assigned to the instance and open a browser session to it. The wizard is reasonably straightforward, but there are two things I want to draw your attention to.

You will be asked to create a data volume. This volume is needed purely to keep CPM configuration and metadata; backups are kept in S3 and do not use this volume. The default size is 5GB, which is enough for roughly 50 instances. If you have a bigger environment, allocate 1GB for every 10 AWS instances.

You will also need to specify AWS credentials for CPM to be able to talk to the AWS APIs. You could use your AWS account credentials, but that is not a security best practice. In AWS you can assign an IAM role to an EC2 instance, which is what you should be using for CPM. You will need to create IAM policies that essentially describe the permissions CPM needs to create backups, perform restores, send notifications via AWS SNS and configure EC2 instances. Just refer to the CPM documentation, copy and paste the configuration for all the policies, create a role and specify that role in the setup wizard.
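
If you prefer to script the role setup, it can be put together with the AWS CLI. The sketch below is illustrative only: the role, profile and policy names are made up, and the actual policy JSON should be copied from the CPM documentation.

# create the role with a standard EC2 trust policy document
aws iam create-role --role-name cpm-backup --assume-role-policy-document file://ec2-trust-policy.json
# attach the backup/restore/SNS permissions taken from the CPM documentation
aws iam put-role-policy --role-name cpm-backup --policy-name cpm-permissions --policy-document file://cpm-policy.json
# wrap the role in an instance profile and attach it to the CPM instance
aws iam create-instance-profile --instance-profile-name cpm-backup
aws iam add-role-to-instance-profile --instance-profile-name cpm-backup --role-name cpm-backup
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=cpm-backup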

Backup Policies

Once you are finished with the initial wizard, you will be able to log in to the appliance using the password you specified during installation. As in most backup solutions, you start with backup policies, which allow you to specify backup targets, schedule and retention.

One thing I want to touch on here is backup schedules, which may be a bit confusing at first. It is easier to explain with an example. Say you want to implement the commonly used GFS (grandfather-father-son) backup scheme, with 7 daily, 4 weekly, 12 monthly and 7 yearly backups. The daily backup should run every day at 8pm, starting from today, and weekly backups run on Sundays.

This is how you would configure such a schedule in CPM:

  • Daily
    • Repeats Every: 1 Days
    • Start Time: Today's date, 20:00
    • Enabled on: Mon-Sat
  • Weekly
    • Repeats Every: 1 Weeks
    • Start Time: Next Sunday, 20:00
    • Enabled on: Mon-Sun
  • Monthly
    • Repeats Every: 1 Months
    • Start Time: 28th of this month, 21:00
    • Enabled on: Mon-Sun
  • Yearly:
    • Repeats Every: 12 Months
    • Start Time: 31st of December, 22:00
    • Enabled on: Mon-Sun

Some of the gotchas here:

  • The “Enabled on” setting is relevant only to the Daily schedule; the rest of the schedules are based on the date you specify in the “Start Time” field. For instance, if the date you specify in the Weekly backup Start Time falls on a Sunday, your weekly backups will run every Sunday.
  • Make sure to run your Monthly backup on the 28th day of every month, to guarantee you get a backup every month, including February.
  • It is not possible to prevent the Weekly backup from running during the last week of every month, so adjust the Start Time of the Monthly backup so that the Weekly and Monthly backups don’t run at the same time if they happen to fall on the same day.
  • The same considerations apply to the Yearly backup as well.

Then you create your daily, weekly, monthly and yearly backup policies using the corresponding schedules and add the EC2 instances that require protection to every policy. Retention is also specified at the policy level. In our scenario we will have 6 generations for Daily, 4 generations for Weekly, 12 generations for Monthly and 7 generations for Yearly.

Notifications

CPM uses the AWS Simple Notification Service (SNS) to send email alerts. If you gave the CPM instance SNS permissions in the IAM role you created earlier, you should be able to simply go to Notification settings, enable Alerts and select the “Create new topic” and “Add user email as recipient” options. CPM will automatically create an SNS topic in AWS for you and send notifications to the email address you specified in the setup wizard. You can change or add email addresses on the SNS topic in the AWS console later if you need to.
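
CPM creates the topic and the subscription for you when you pick those options, but if you ever need to manage recipients outside of CPM, the equivalent AWS CLI calls look roughly like this (the topic name, account ID and email address are examples):

# create a topic for CPM alerts and subscribe an additional mailbox to it
aws sns create-topic --name cpm-alerts
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:cpm-alerts --protocol email --notification-endpoint ops@example.com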

Conclusion

This is all you need to get your Cloud Protection Manager up and running. In the next blog post we will look at how instances are backed up and restored and discuss some of the advanced backup options CPM offers.

Brocade 300 Initial Setup

December 8, 2015

There are a few steps you need to perform on the Brocades before moving on to cabling and zoning. The process is pretty straightforward, but worth documenting, especially for those who are doing it for the first time.

After you power on the switch there are two ways of setting it up: GUI or CLI. We'll go hardcore and do all the configuration in the CLI, but if you wish you can assign your laptop a static IP from the 10.77.77.0/24 subnet and browse to https://10.77.77.77. Default credentials are admin/password for both the GUI and the CLI.

Network Settings

To configure network settings, such as the hostname, management IP address, DNS and NTP, use the following commands:

> switchname PRODFCSW01
> ipaddrset
> dnsconfig
> tsclockserver 10.10.10.1

Most of these commands are interactive and prompt for parameters. The only caveat is that if you have multiple switches in the same fabric, make sure to set the NTP server to LOCL on all subordinate switches. This instructs them to synchronize their time with the principal switch.
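
For example, in a two-switch fabric the commands would look roughly like this (the IP address is an example; running tsclockserver with no arguments shows the current setting):

# on the principal switch, point the fabric at the external NTP server
> tsclockserver "10.10.10.1"
# on each subordinate switch, set LOCL so it follows the principal switch's clock
> tsclockserver "LOCL"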

Firmware Upgrade

This is the fun part. You can upgrade the switch firmware from a USB stick, but the most common way is to upgrade over FTP. This obviously means you need an FTP server. You can use the FileZilla FTP server, which is decent and free.

Download the server and the client parts and install both; the default settings work just fine. Go to Edit > Users and add an anonymous user. Give it a home folder and unpack the downloaded firmware into it. This is what it should look like:

[Screenshot: FileZilla anonymous user configuration]

To upgrade the firmware, run the following command on the switch (it is also interactive) and then reboot:

> firmwaredownload -s

If you’re running a Fabric OS revision older than 7.0.x, such as 6.3.x or 6.4.x, then you will need to upgrade to version 7.0.x first and then to your target version, such as 7.3.x or 7.4.x.
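
In practice the multi-hop upgrade is just the same procedure repeated: check the running version, stage the intermediate release in the FTP home folder, upgrade and reboot, then repeat with the target release. A rough outline (versions are illustrative):

# check the currently running Fabric OS version
> firmwareshow
# first hop: stage and install the 7.0.x release, then reboot
> firmwaredownload -s
# second hop: stage and install the target 7.3.x/7.4.x release, then reboot again
> firmwaredownload -s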

In the next blog post I will discuss firmware upgrades in more detail, such as how to do a non-disruptive upgrade on a production switch and where to download vendor-specific FOS firmware from.

Limiting the number of concurrent storage vMotions

June 6, 2013

VMware vCenter allows several concurrent Storage vMotions on a datastore, but this can negatively impact your production environment by hammering the underlying storage. If you want to migrate several virtual machines to another datastore, it is much safer to do it one by one, but that is a lot of manual work.

There is a simple way to limit the number of concurrent Storage vMotions by configuring vCenter advanced settings. There is a group of resource management parameters which set network, host and datastore limits for vMotion and Storage vMotion; they are called limits and costs. For ESXi 4.1 the default datastore limit for migration with Storage vMotion is 128, and the datastore resource cost of a Storage vMotion is 16 (defaults for other versions of ESXi can be found here: Limits on Simultaneous Migrations). Since 128 / 16 = 8, this means 8 concurrent Storage vMotions are allowed per datastore. So to allow only one Storage vMotion at a time you can either lower the limit to 16 or raise the cost to 128.

Let's say we choose to change the cost to 128. There are two ways of doing it. The first is to edit the vCenter vpxd.cfg file and add the following stanza between the <vpxd></vpxd> tags:

<ResourceManager>
<CostPerEsx41SVmotion>128</CostPerEsx41SVmotion>
</ResourceManager>
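
For reference, on a default Windows-based vCenter 4.x installation the file location and the service restart typically look like this (the path and the vpxd service name are assumptions for a standard install, so double-check them on your server):

rem vpxd.cfg location on a default Windows Server 2008 vCenter install (assumption)
notepad "C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg"
rem restart the vCenter Server service so vpxd re-reads its configuration
net stop vpxd
net start vpxd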

The second, simpler way is to go to vCenter -> Administration -> vCenter Server Settings -> Advanced Settings and add the config.vpxd.ResourceManager.CostPerEsx41SVmotion key with a value of 128. You will probably need to restart vCenter after that.

There is one caveat, however. If you migrate VMs from, say, 3 source datastores to 1 destination datastore, then 3 concurrent Storage vMotions will kick off. I do not know the reason for that, but it is what I have seen in practice.

NetApp SSH Connection Times Out

May 31, 2013

There is one tricky thing about SSH connections to NetApp filers. If you use PuTTY or PuTTY Connection Manager and experience frequent timeouts of your SSH sessions, you may need to fiddle with the PuTTY configuration options. It seems there is some issue with how Data ONTAP implements SSH key exchange, which results in frequent, annoying disconnections.

To fix that, on the PuTTY Configuration screen go to Connection -> SSH -> Bugs and change “Handles SSH2 key re-exchange badly” to ‘On’. That should do it.

NetApp thin provisioning for VMware

March 15, 2012

Thin provisioning is a popular buzzword, especially when it comes to NetApp, but it can genuinely save you time and headaches in a number of situations. We use thin provisioning when presenting NFS volumes from NetApp to VI3 ESX hosts. NetApp already lets you change the size of its FlexVol volumes on the fly, but you have to do it manually. Thin provisioning lets you configure volumes so that, in case of a space shortage, a volume expands automatically without manual intervention. Of course you still need to keep an eye on your volumes, otherwise they can consume all of your storage space, but it buys you enough time to deal with data growth. Without thin provisioning, in such a situation your applications can easily crash.

NetApp doesn’t support iSCSI thin provisioning for VMware, so NFS is the only option. Don’t be afraid of performance issues: NFS is without doubt slower than FC, but NetApp is famous for its NFS performance and it is very well suited for mid-level workloads.

To be more specific, with thin provisioning you can create, say, a 300GB virtual hard drive for a particular VM and it will initially use no space; it then grows as you fill it. This can save you a tremendous amount of storage space, because you never know exactly in advance how much space you will need. Be aware, though, that if you try to migrate a thin-provisioned virtual hard drive using the storage migration plugin for VMware VirtualCenter, it will inflate to its full size: a 300GB disk will consume all 300GB even if it is only half full.

The best document to help you integrate NetApp with VMware VI3 is NetApp TR-3428: NetApp and VMware Virtual Infrastructure 3 Storage Best Practices. What I write here is basically a set of excerpts from that document.

NetApp Configuration

Let's start with the NetApp configuration. The first thing to do, as usual, is to disable scheduled snapshots. Generally it's not a good idea to take snapshots of VMware virtual hard drives on the fly, because they won't be consistent. I will touch on this topic in later posts.

> snap sched <vol-name> 0 0 0
> snap reserve <vol-name> 0

The next step is to disable access time updates on the volume, which is safe because VMware doesn't rely on accurate access times for its files. This improves performance, since the filer won't need to update access times each time files are read or written.

> vol options <vol-name> no_atime_update on

Then configure the thin provisioning feature itself by switching the volume autosize policy on. It has two options, -m and -i: with -m you set the maximum volume size and with -i you set the increment size.

> vol autosize <vol-name> [-m <size>[k|m|g|t]] [-i <size>[k|m|g|t]] on
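
For example, to let a volume grow in 20GB increments up to a 500GB ceiling, the command would look like this (the volume name and sizes are just an illustration):

> vol autosize vmware_nfs_vol -m 500g -i 20g on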

NetApp recommends disabling Fractional Reserve for thin-provisioned volumes; it's simply not needed anymore. Fractional Reserve guarantees successful writes to a volume when you use snapshots. Because of how snapshots work, if you completely overwrite the data captured in a snapshot you consume double the amount of storage space, and that is where Fractional Reserve comes into play: it reserves 100% of additional space for such cases, so you never run into a situation where you are out of space because of active snapshots. But since we have enabled autosize, our volume resizes on demand and Fractional Reserve becomes redundant. Presumably autosize was implemented a little later than Fractional Reserve, which is why NetApp has both.

> vol options <vol-name> fractional_reserve 0

If you use snapshots as a tool for instant VMware block-level backups, you can also configure the snapshot autodelete policy. I said earlier that you should disable the snapshot schedule; however, you can still create consistent snapshots manually (using scripts). If you want to do that, you can additionally instruct NetApp to delete the oldest snapshots when the filer is out of space and the volume can't auto-grow.

> snap autodelete <vol-name> commitment try trigger volume target_free_space 5 delete_order oldest_first
> vol options <vol-name> try_first volume_grow

Now we need to create an NFS export on the NetApp filer. This is where the FilerView interface comes in handy. In short, you should give your ESX hosts read-write and root access and configure the Unix security style.
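
If you prefer the command line over FilerView, the same can be done with exportfs and qtree security. A sketch with example ESX host IP addresses and volume name:

# export the volume read-write with root access for the ESX hosts
> exportfs -p rw=192.168.1.21:192.168.1.22,root=192.168.1.21:192.168.1.22 /vol/vmware_nfs_vol
# set the Unix security style on the volume
> qtree security /vol/vmware_nfs_vol unix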

VMware Configuration

The VMware configuration is trivial. Go to the VMware Add Storage wizard, select Network File System, then point it at your NetApp filer and specify your volume path. Additionally, NetApp recommends tuning the NFS heartbeat parameters. Go to Host Configuration -> Advanced Settings -> NFS and for ESX 3.0 hosts change:

NFS.HeartbeatFrequency to 5 from 9
NFS.HeartbeatMaxFailures to 25 from 3

For ESX 3.5 hosts change:

NFS.HeartbeatFrequency to 12
NFS.HeartbeatMaxFailures to 10
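
If you have many hosts, the same settings can be applied from the ESX service console with esxcfg-advcfg (the values below are the ESX 3.5 ones from above):

# set the NFS heartbeat parameters on an ESX 3.5 host
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures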

There is much more information, and there are more tuning parameters that you might want to read about. Find some time to look through TR-3428 if you need clarification or additional details.