AWS Cloud Protection Manager Part 2: Configuration

August 14, 2017

Overview

As we discussed in Part 1 of this series, snapshots serve as a good basis for implementing backup in AWS. But AWS does not provide an out-of-the-box tool that can manage snapshots at scale and perform snapshot creation/deletion based on a defined retention. The rich AWS APIs allow you to build such a tool yourself, or you can use an existing backup solution built for AWS. In this blog post we are looking at one such product, called Cloud Protection Manager.
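To give a sense of what "rolling your own" involves, below is a minimal sketch of snapshot creation using the AWS API via Python and boto3. The region, instance ID and description are placeholders, and a real tool would also need tagging, error handling and retention logic.

import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # region is a placeholder

def snapshot_instance_volumes(instance_id):
    # Snapshots are taken per volume, so find every EBS volume attached to the instance
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    for volume in volumes:
        snapshot = ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description="Nightly backup of " + instance_id,
        )
        print("Started snapshot", snapshot["SnapshotId"], "for", volume["VolumeId"])

snapshot_instance_volumes("i-0123456789abcdef0")  # hypothetical instance ID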

You may be surprised to know that the first version of Cloud Protection Manager was released back in 2013. The product has matured over the years and the current CPM version 2.1, according to the N2W website, has become quite popular amongst AWS customers.

CPM is offered in four different versions: Standard, Advanced, Enterprise and Enterprise Plus. Functionality across all four versions is mostly the same, with the key difference being the number of instances you can back up, ranging from 20 instances in Standard to per-instance licensing ($5 per instance) in Enterprise Plus.

Installation

CPM offers a very straightforward consumption model. You purchase it from AWS marketplace and pay by the month. Licensing costs are billed directly to your account. There are no additional steps involved.

To install CPM, find the version you want to purchase in the AWS Marketplace, specify instance settings such as region, VPC subnet and security group, then accept the terms and click Launch. AWS will spin up a new CPM server as an EC2 instance for you. You also have the option to run a 30-day trial if you want to play with the product before making a purchasing decision.

Note that CPM needs to be able to talk to the AWS API endpoints to perform snapshots, so make sure that the appliance has Internet access by means of a public IP address, an Elastic IP address or a NAT gateway. Similarly, the security group you attach it to should at least allow outbound HTTPS.

Initial Configuration

The appliance is then configured using an initial setup wizard. Find out what private IP address has been assigned to the instance and open a browser session to it. The wizard is reasonably straightforward, but there are two things I want to draw your attention to.

You will be asked to create a data volume. This volume is needed purely to keep CPM configuration and metadata; backups are kept in S3 and do not use this volume. The default size is 5GB, which is enough for roughly 50 instances. If you have a bigger environment, allocate 1GB for every 10 AWS instances.

You will also need to specify AWS credentials for CPM to be able to talk to the AWS APIs. You can use your AWS account credentials, but this is not a security best practice. In AWS you can assign a role to an EC2 instance, which is what you should be using for CPM. You will need to create IAM policies that essentially describe permissions for CPM to create backups, perform restores, send notifications via AWS SNS and configure EC2 instances. Just refer to the CPM documentation, copy and paste the configuration for all policies, create a role and specify the role in the setup wizard.
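For illustration only, here is a rough sketch (Python/boto3) of how such a role could be created and attached to an instance profile. The role and policy names are made up, and the actions listed are just a plausible subset; always copy the actual policy documents from the CPM documentation.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that allows EC2 instances to assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="cpm-backup-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Illustrative permissions only -- use the full policy set from the CPM documentation
backup_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:CreateSnapshot", "ec2:DeleteSnapshot",
                   "ec2:DescribeSnapshots", "ec2:DescribeVolumes",
                   "sns:Publish"],
        "Resource": "*",
    }],
}
iam.put_role_policy(RoleName="cpm-backup-role",
                    PolicyName="cpm-backup-policy",
                    PolicyDocument=json.dumps(backup_policy))

# The instance profile is what actually gets attached to the CPM EC2 instance
iam.create_instance_profile(InstanceProfileName="cpm-backup-profile")
iam.add_role_to_instance_profile(InstanceProfileName="cpm-backup-profile",
                                 RoleName="cpm-backup-role")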

Backup Policies

Once you are finished with the initial wizard you will be able to log in to the appliance using the password you specified during installation. As in most backup solutions you start with backup policies, which allow you to specify backup targets, schedule and retention.

One thing that I want to touch on here is backup schedules, which may be a bit confusing at first. It is easier to explain with an example. Say you want to implement a commonly used GFS backup scheme, with 7 daily, 4 weekly, 12 monthly and 7 yearly backups. Daily backups should run every day at 8pm, starting from today. Weekly backups run on Sundays.

This is how you would configure such schedule in CPM:

  • Daily
    • Repeats Every: 1 Days
    • Start Time: Today Date, 20:00
    • Enabled on: Mon-Sat
  • Weekly
    • Repeats Every: 1 Weeks
    • Start Time: Next Sunday, 20:00
    • Enabled on: Mon-Sun
  • Monthly
    • Repeats Every: 1 Months
    • Start Time: 28th of this month, 21:00
    • Enabled on: Mon-Sun
  • Yearly:
    • Repeats Every: 12 Months
    • Start Time: 31st of December, 22:00
    • Enabled on: Mon-Sun

Some of the gotchas here:

  • The “Enabled on” setting is relevant only to the Daily schedule; the rest of the schedules are based on the date you specify in the “Start Time” field. For instance, if the date you specify in the Weekly backup Start Time is a Sunday, your weekly backups will run every Sunday.
  • Make sure to run your Monthly backup on the 28th day of every month, to guarantee you have a backup every month, including February.
  • It’s not possible to prevent the Weekly backup from running on the last week of every month. So make sure to adjust the Start Time of the Monthly backup so that the Weekly and Monthly backups don’t run at the same time if they happen to fall on the same day.
  • The same considerations apply to the Yearly backup.

You then create your daily, weekly, monthly and yearly backup policies using the corresponding schedules and add the EC2 instances that require protection to every policy. Retention is also specified at the policy level. In our scenario we will have 6 generations for Daily, 4 generations for Weekly, 12 generations for Monthly and 7 generations for Yearly.

Notifications

CPM uses the AWS Simple Notification Service (SNS) to send email alerts. If you gave the CPM instance SNS permissions in the IAM role you created previously, you can simply go to Notification settings, enable Alerts and select the “Create new topic” and “Add user email as recipient” options. CPM will automatically create an SNS topic in AWS for you and send notifications to the email address you specified in the setup wizard. You can change or add more email addresses to the SNS topic in the AWS console later on if you need to.
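If you prefer to add recipients programmatically rather than through the console, a single call to the SNS API does the job. A hedged sketch with a made-up topic ARN and email address:

import boto3

sns = boto3.client("sns", region_name="ap-southeast-2")  # region is a placeholder

# Topic ARN of the topic CPM created for you -- look it up in the SNS console (value below is made up)
topic_arn = "arn:aws:sns:ap-southeast-2:123456789012:cpm-alerts"

# Each new address receives a confirmation email and must accept the subscription
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops-team@example.com")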

Conclusion

This is all you need to get your Cloud Protection Manager up and running. In the next blog post we will look at how instances are backed up and restored and discuss some of the advanced backup options CPM offers.

AWS Cloud Protection Manager Part 1: Intro to Snapshots

August 7, 2017

Overview

We have all been dealing with on-premises infrastructure for years before public cloud became a thing. Depending on the complexity and size of your environment, retention and other requirements, you can choose from a plethora of products, such as Veeam, Commvault, NetWorker, you name it.

When public cloud started getting more traction, we realised that the paradigm had shifted. Public cloud providers are pushing us to treat instances of application servers as cattle rather than pets, and to build fault tolerance into code instead of relying on the reliability of the underlying infrastructure.

If an application is no longer monolithic and is distributed among multiple instances of the same server, the loss of any one instance has little overall impact on the application and does not require a restore. We can simply spin up a new instance instead.

However, the majority of applications in today’s IT environments are still monolithic and need to be backed up. Data needs to be protected as well: if it gets corrupted, or if for any other reason we need to roll back our application to a certain point in time, backups are necessary.

AWS Snapshots

When I started researching the topic of backup in the cloud, I realised that it is still the Wild West. Speaking of AWS specifically, if you are using one of the native AWS services, such as RDS, you usually have a backup feature built in. Otherwise, if you simply want to back up some application or service running inside an AWS instance, all you have is snapshots.

We have all heard the mantra that a snapshot is not a backup, because it is usually kept in the same place as the original data it was created from. This is true for most traditional storage arrays. AWS is slightly different.

Typically, when you start an EC2 instance, its disk volumes are kept on Elastic Block Store (EBS), which is the equivalent of block storage in the traditional infrastructure world. But when you create a snapshot of an EBS volume, the snapshot is transferred to S3 object storage.

When the first snapshot is created, the whole volume needs to be copied to S3. It may take some time to complete, depending on the volume size, but the copy process happens in the background and doesn’t impact the original EC2 instance. All subsequent snapshots are incremental and will be significantly quicker, unless you overwrite a large amount of data between snapshots.

CloudWatch Event Rules

As we can see, unlike traditional storage arrays, AWS snapshots are kept separately from the original data. They can even be restored to another availability zone or copied to another region to protect from region-wide failures, which makes AWS snapshots a decent backup solution. But the devil is in the details.
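As a quick illustration of the cross-region point, copying a snapshot is a single API call issued against the destination region. A minimal sketch with placeholder regions and snapshot ID:

import boto3

# The copy request is made from the destination region and pulls the snapshot across
dest = boto3.client("ec2", region_name="us-west-2")  # placeholder destination region

copy = dest.copy_snapshot(
    SourceRegion="ap-southeast-2",              # placeholder source region
    SourceSnapshotId="snap-0123456789abcdef0",  # placeholder snapshot ID
    Description="DR copy of nightly backup",
)
print("Copy started:", copy["SnapshotId"])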

Creating a volume snapshot manually is simple: you pick “Create Snapshot” from the Actions menu and AWS handles the rest. To automate the process, AWS offers CloudWatch event rules for scheduled snapshot creation. By going to CloudWatch > Events > Create Rule, setting up a schedule, then selecting “EC2 CreateSnapshot API call” and the volume ID in the Target section, you can implement a simple backup in AWS.

cloudwatch_snapshot1.jpg

However, there are two immediate problems with this approach. The first is scalability. Snapshots in AWS are created on volumes, not instances. This has interesting implications, such as the inability to create a consistent snapshot across multiple volumes. But more importantly, if you have more than one volume on every instance, you may end up with dozens of CloudWatch rules that are hard to manage and keep track of.

cloudwatch_snapshot2.jpg

The second is retention. CloudWatch events let you create snapshots, but you cannot specify how long you want to keep them for. You will need to write your own scripts using the AWS APIs to delete snapshots, which makes the whole solution questionable, as you could then use the APIs to create snapshots as well.
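For completeness, here is roughly what such a retention script could look like in Python with boto3. It is a simplified sketch: it deletes every snapshot owned by the account that is older than the retention period, whereas a real script would filter by tags so that it only touches snapshots created by your backup rules, and would handle pagination.

import datetime
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # region is a placeholder
RETENTION_DAYS = 7

cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)

# Only consider snapshots owned by this account
for snapshot in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snapshot["StartTime"] < cutoff:
        print("Deleting", snapshot["SnapshotId"], "taken", snapshot["StartTime"])
        ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])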

Conclusion

As you can see, AWS does not offer a simple out-of-the-box solution for EC2 backup. You either need to write your own scripts using the AWS APIs or lean on backup solutions built specifically to solve the problem of backups in AWS. One such solution, Cloud Protection Manager, is what we will discuss in the next blog post of the series.

First Look at AWS Management Portal for vCenter Part 2: Administration

June 30, 2017

In part 1 of the series we looked at the Management Portal deployment. Let’s move on to an overview of the portal functionality.

Portal Dashboard

Once you open the portal you are asked to pick your region (region preferences can later be changed only from the Web Client). You then proceed to the dashboard, where you can see all the instances you already have running in AWS. If you don’t see your VPCs, make sure the user you’re logging in with is on the list of administrators in AMP (user and domain names are case sensitive).

default_env

Here you can find detailed configuration information for each instance (Summary page), view performance metrics (pulled from CloudWatch) and perform simple tasks, such as stopping/rebooting/terminating an instance or creating an Amazon Machine Image (AMI). You can also generate a Windows password from a key pair if you need to connect to a VM via RDP or SSH.

Virtual Private Cloud Configuration

If the dashboard tab is more operations-focused, the VPC tab is configuration-centric. Here you can create new VPCs, subnets and security groups. This can be handy if you want to add a rule to a security group to, for instance, allow RDP access to AWS instances from a certain IP.

edit_sg

If you spend most of your time in vCenter this can be helpful, as you don’t need to go to the AWS console every time to perform such simple day-to-day tasks.

Virtual Machine Provisioning

The portal supports simple instance provisioning from Amazon Machine Images (AMIs). You start by creating an environment (the Default Environment can’t be used to deploy new instances). Then you create a template, where you pick an AMI and specify configuration options, such as instance type, subnets and security groups.

create_template

Note: when creating a template, make sure to search for AMIs by AMI ID. The AMI IDs in the quick start list are not up to date and will cause instance deployment to fail with the following error:

Failed to launch instance due to EC2 error: The specified AMI is no longer available or you are not authorized to use it.

You can then go ahead and deploy an instance from a template.

Virtual Machine Migration

Saving the best for last: VM migration is probably one of the coolest portal features. Right-click a VM in the vCenter inventory and select Migrate to EC2. You will be asked where you want to place the VM and how the AWS instance should be configured.

ec2_migrate

When you hit the button, AMP first exports the VM as an OVF image and then uploads the image to AWS. As a result, you get a copy of your VM in an AWS VPC with minimal effort.

ec2_migration2

When it comes to VM migration to AWS, there is, of course, much more to it than just copying the data. The machine gets a new SID, which not all applications and services like. There are compatibility considerations, data gravity, network connectivity and more. But AMP does all the heavy lifting for you.

Conclusion

I can’t say that I was overly impressed with the tool; it’s very basic and somewhat limited. Security Groups can be created, but cannot be applied to running instances. Similarly, templates can be created, but not edited.

But I would still recommend giving it a go. Maybe you will find it useful in your day-to-day operations. It gives you visibility into your AWS environment, saving time jumping between two management consoles. And don’t underestimate the migration feature. Where other vendors ask for a premium, AWS Management Portal for vCenter gives it to you for free.

First Look at AWS Management Portal for vCenter Part 1: Deployment

December 18, 2016

Cloud has been a hot topic in IT for quite a while, for the valid reasons and benefits it brings, such as agility and economies of scale. More and more customers are starting to embark on the cloud journey, whether it’s DR to cloud, using cloud as Tier 3 storage, or even full production migrations for the purpose of shrinking the physical data center footprint.

vmware_aws

Even though full data center migrations to cloud are not that uncommon, many customers use cloud for certain use cases and keep other, more static workloads on-premises, where it may be more cost-effective. What this means is that they end up with two environments that they have to manage separately. This introduces complexity into the operational model, as each environment has its own management tools.

Overview

AWS Management Portal for vCenter helps to bridge this gap by connecting your on-premises vSphere environment to AWS and letting you perform basic management tasks, such as creating VPCs and security groups, deploying EC2 instances from AMI templates and even migrating VMs from vSphere to cloud, all without leaving the familiar vCenter user interface.

connector_architecture

The solution consists of two components: AWS Management Portal for vCenter, which is configured in AWS, and AWS Connector for vCenter, which is a Linux appliance deployed on-premises. Let’s start with the management portal first.

Configure Management Portal

AWS Management Portal for vCenter, or simply AMP, can be accessed at the following link: https://amp.aws.amazon.com. Configuration is wizard-based and its main purpose is to set up authentication so that vCenter users can access the AWS cloud through the portal.

aws_amp.jpg

You have the option of either using SAML, which has prerequisites, or simply choosing the connector to be your authentication provider, which is the easiest option.

If you choose the latter, you will need to pre-configure a trust relationship between AWS Connector and the portal. The first step of the process is to create an Identity and Access Management (IAM) user in the AWS Management Console and assign the “AWSConnector” IAM policy to it (the connector will then use this account to authenticate to AWS). This step is explained in detail in the Option 1: Federation Authentication Proxy section of the AWS Management Portal for vCenter User Guide.

add_admin

You will also be asked to specify the vCenter accounts that will have access to AWS and to generate an AMP-Connector Key. Save your IAM account Access Key / Secret and the AMP-Connector Key. You will need them in the AWS Connector registration wizard.

Configure AWS Connector

AWS Connector is distributed as an OVA, which you can download here:

To assign a static IP address to the appliance you will need to open the VM console and log in as ec2-user with the password ec2pass. Run the setup script and change the network settings as desired. The connector also supports connecting to AWS through a proxy if required.

# sudo setup.rb

Browse to the appliance IP address to link AWS Connector to your vCenter and set the appliance’s password. You will then be presented with the registration wizard.

The wizard will ask you to provide a service account for AWS Connector (create a non-privileged domain account for it) and the credentials of the IAM trust account you created previously. You will also need your trust role’s ARN (not the user’s ARN), which you can get from the AMP-Connector Federation Proxy section of the AWS Management Portal for vCenter setup page.

If everything is done correctly, you will get to the plug-in registration page with the configuration summary, which will look similar to this:

registration_complete

Summary

AWS Connector will register a vCenter plug-in, which you will see in both the vSphere Client and the Web Client.

aws_wclient

That completes the deployment part. In the next blog post of the series we will talk in more detail about how AWS Management Portal can be leveraged to manage VPCs and EC2 instances.

Puppet Camp 2016 Recap

December 4, 2016

Last week I had a chance to attend Puppet Camp 2016. Puppet Camp is a one-day event that is held once a year in many places around the world, including Australia. This time it was the fourth Melbourne conference, which gathered 240 attendees and several key partners, such as NetApp, Diaxion and Katana1.

In this blog post I want to give a quick overview of the keynote, customer and partner sessions, as well as my key takeaways from the conference.

First Impressions

I had never been to Puppet Camp before, so this was my first experience. The sheer number of participants clearly shows that configuration management, and DevOps in general, attracts a lot of attention from both customers and the channel.

imag5067_2

You may have heard how Cisco in Q3 of 2015 announced Puppet support for the Nexus 3000 and 9000 series switches. This was not just an accident. I had a chance to speak to NetApp, one of the vendors present at the conference, and they now have Puppet integration with their Data ONTAP / FAS platform, as well as the E-Series and recently acquired SolidFire lines of storage arrays. I’m sure many other hardware vendors will follow.

Keynote and Puppet Update

The conference had a single track of sessions spread throughout the day and was opened by a keynote from Robert Finn, APJ Sales Director at Puppet, who talked about the rising complexity of modern IT environments and the challenges that come with it. We have gone from tens of servers to hundreds of VMs and are now on the verge of the next evolution, from hundreds of VMs to thousands of containers. We can no longer manage environments manually, and that is where tools such as Puppet come into play and let us manage configuration and provisioning at scale.

Rob also mentioned the “State of DevOps Report”, an annual survey Puppet has now been running for five years in a row. In 2016 they collected responses from 4,600 technical professionals and shared a lot of their findings in a public report, which I’ll link in the references section below.

state_of_devops

Key takeaways: by introducing configuration management into their software development practices, organizations were able to achieve a 3x lower change failure rate and 24x faster recovery from failures.

Ronny Sabapathee, Puppet Solutions Engineer, gave an overview of the new features in the latest Puppet Enterprise 2016.4, such as corrective change reporting, changes to Puppet Orchestrator, enhancements in Code Manager and API improvements.

Key takeaways: the Puppet ecosystem is growing quickly, with a Docker module, a Jenkins plugin, significant enhancements in the Azure module and VMware vRealize Automation/Orchestrator integration coming soon.

Customer Sessions

Rob Kenefik from SpecSavers spoke about their journey of scaling the free version of Puppet from 10 to 290 nodes, the issues they came across and the adjustments they had to make, especially around the database back-end.

Key takeaways: don’t use the embedded Puppet database for production deployments. PostgreSQL (which is now the default) provides the required scalability.

Steve Curtis from ANZ briefly discussed how they automated the deployment of Application Performance Monitoring (APM) agents using Puppet. Steve also has a post on the Puppet blog, which I’ll link below.

Chris Harwood from Healthdirect Australia touched on the sensitive topic of organizational silos and how teams become too focused on their own performance, forgetting about the customers, who should be the key priority for businesses offering customer-facing services.

He then showed how Healthdirect moved some of the ops people into development teams, giving devs access to infrastructure and making them autonomous, which significantly improved their development workflows and release frequency.

Key takeaways: the key challenges of DevOps are around people and processes, not technology. Teams not collaborating and lengthy infrastructure change management processes can significantly hinder development teams’ performance.

Partner Sessions

Dinesh Siriwardhane, who represented Versent, compared the pros and cons of master/agent vs. masterless Puppet deployments and showed a demo of Puppet certificate management.

Key takeaways: a Puppet master simplifies centralized management and provides reporting capabilities, but can be a single point of failure. A masterless deployment using GitHub has no single point of failure and is free, but can have major security repercussions if the Git repository is compromised.

imag5085_1

Kieran Sweet and Pedram Sanayei from Sourced presented on Puppet integration with Azure and how using Puppet, instead of just the low-level Azure APIs and PowerShell, can significantly simplify deployment and configuration management in the Microsoft cloud.

Key takeaways: Azure Resource Manager is a big step forward from the old Azure Service Management (classic deployment model). In light of the significant recent enhancements in the Azure Puppet module, this can become a reasonable alternative to AWS.

Scott Coulton from Autopilot closed the conference with a session on Puppet integration with Docker and, more specifically, container orchestration tools such as Docker Swarm, Kubernetes, Mesos and Flocker. Be sure to check out Scott’s blog and GitHub repository, where you can find a Puppet module for Docker Swarm, a Vagrant template and more.

Key takeaways: Docker can be used to deploy containers, but Puppet is still essential to keep configuration across the hosts consistent.

Conclusion

I spoke to a lot of customers at the conference and what became apparent to me was that Puppet is not just another DevOps tool among many. It has a wide ecosystem of partners and has come a long way since it started as a small start-up back in 2005.

It has a strong use case for general configuration management in Linux environments, as well as providing application configuration consistency as part of CI/CD pipelines.

Speaking of the conference itself, I was pleasantly surprised by the quality of the sessions and the organization in general. Puppet Camp will definitely stay on my radar. I’d love to come back next year and geek out with the DevOps crowd again.

References

Dell Force10 Part 3: VLT Domain Configuration

July 31, 2016

In my previous post here I went through VLT basics and how it helps to establish a loop-free network topology in a modern datacenter. Now let’s dive deeper and see how VLT is configured from the FTOS CLI.

VLT Configuration

The first step is to configure the backup links and the VLT interconnect. Dell S4048-ON switches have six 40Gb QSFP+ ports, two of which, 1/49 and 1/50, we will use for the VLTi. Repeat the same configuration on both switches.

# int range fo 1/49-1/50
# no shutdown

# interface port-channel 127
# description “VLT interconnect”
# channel-member fo 1/49
# channel-member fo 1/50
# no shutdown

Now that we have a VLT interconnect set up, let’s join the first switch to a VLT domain:

# vlt domain 1
# back-up destination 172.10.10.11
# peer-link port-channel 127
# primary-priority 1

The first switch points to the second switch’s management IP as the backup destination, uses port channel 127 as the VLT interconnect and becomes the primary peer, because it’s given the lowest priority of 1.

Do the same on the second switch, but this time point to the first switch’s management IP for backup and use a higher priority to make this switch the secondary peer:

# vlt domain 1
# back-up destination 172.10.10.10
# peer-link port-channel 127
# primary-priority 8192

To confirm the VLT state use the following command:

# sh vlt brief

vlt_brief

As you can see, the VLTi and backup links are up and the switch can see its peer. For some additional VLT-specific information use these commands:

# sh vlt statistics
# sh vlt backup-link

I would also recommend using the following command to see the port channel state and confirm that both VLTi links are up:

# sh int po127

po_state

Conclusion

In this part of the Dell Force10 switch configuration series we quickly went through the initial VLT setup. We haven’t touched on VLT LAG configuration yet. We will take a closer look at it in the next blog post.

Dell Force10 Part 2: VLT Basics

July 10, 2016

Last time I wrote a blog post on the initial configuration of Force10 switches, which you can find here. There I talked about firmware upgrades and basic features, such as STP and Flow Control. In this blog post I would like to touch on a key feature of Force10 switches: Virtual Link Trunking (VLT).

VLT is Force10’s implementation of Multi-Chassis Link Aggregation Group (MLAG), which is similar to Virtual Port Channels (vPC) on Cisco Nexus switches. The goal of VLT is to let you establish one aggregated link to two physical network switches in a loop-free topology, something that is not possible with two standalone switches.

You could say that switch stacking gives you similar capabilities, and you would be right. The issue with stacked switches, though, is that they act as a single switch not only from the data plane point of view, but also from the control plane point of view. The implication is that if you need to upgrade a switch stack, you have to reboot both switches at the same time, which brings down your network. If you have an iSCSI or NFS storage array connected to the stack, this may cause trouble, especially in enterprise environments.

With VLT you also have one data plane, but individual control planes. As a result, each switch can be managed and upgraded separately without full network downtime.

VLT Terminology

Virtual Link Trunking uses the following set of terms:

  • VLT peer – one of the two switches participating in VLT (you can have a maximum of two switches in a VLT domain)
  • VLT interconnect (VLTi) – interconnect link between the two switches to synchronize the MAC address tables and other VLT-related data
  • VLT backup link – a heartbeat link used to send keep-alive messages between the two switches; it’s also used to identify switch state if the VLTi link fails
  • VLT – the name of the feature, Virtual Link Trunking, as well as a VLT link aggregation group, a Virtual Link Trunk. We will call the aggregated link a VLT LAG to avoid ambiguity.
  • VLT domain – grouping of all of the above

VLT Topology

This is what a sample VLT domain looks like. S4048-ON switches have six 40Gb QSFP+ ports, two of which we use for the VLT interconnect. It’s recommended to use a static LAG for the VLTi.

basic_vlt

Two 1Gb links are used for VLT backup. You can use switch out-of-band management ports for this. Four 10Gb links form a VLT LAG to the upstream core switch.

Use Cases

So where is this actually helpful? The vast majority of today’s environments are virtualized and do not require LAGs. vSphere already uses teaming on vSwitch uplinks to distribute traffic across all network ports by default. There are some use cases in VMware environments where you can create a LAG to a vSphere Distributed Switch for faster link failure convergence or improved packet switching. Unless you have a really large vSphere environment this is generally not required, but you may use this option later on if needed. Read Chris Wahl’s blog post here for more info.

Where VLT is really helpful is in building a loop-free network topology in your datacenter. See, all your vSphere hosts are connected to both Force10 switches for redundancy. Since traffic arrives at either switch depending on which uplink is picked on an ESXi host, you have to make sure that VMs on switch 1 are able to communicate with VMs on switch 2. If all you had in your environment were two Force10 switches, you would establish a LAG between the two and be done with it. But if your network topology is a bit larger than this and you have at least one additional core switch/router in your environment, you’d be faced with the following dilemma: how can you ensure efficient traffic switching in your network without creating loops?

stp_loop

You can no longer create a LAG between the two Force10 switches, as it would create a loop. Your only option is to keep the switches connected only to the core and not to each other. And by doing that you will cause all traffic from VMs on switch 1 destined for VMs on switch 2, and vice versa, to traverse the core.

east_west_traffic

And that’s where VLT comes into play. All east-west traffic between servers is contained within the VLT domain and doesn’t need to traverse the core. As shown above, if we didn’t use VLT, traffic from one switch to another would have to go from switch 1 to the core and then back from the core to switch 2. In a VLT domain, traffic between the switches goes directly from switch 1 to switch 2 over the VLTi.

Conclusion

That’s a brief introduction to VLT theory. In the next few posts we will look at how exactly VLT is configured and map theory to practice.

Dell Force10 Part 1: Initial Configuration

July 3, 2016

force10_S4048_on
When it comes to networking, Dell has two main series of switches: PowerConnect/N-series, which run the DNOS 6.x operating system, and S/Z-series switches, which run DNOS 9.x, derived from the Force10 OS (FTOS). In this series of blog posts we will go through the configuration of the Force10 switch series, using a Dell S4048-ON top-of-rack switch as an example.

It is interesting to note that, unlike other S-series switches, the S4048-ON is an Open Networking switch. Dell is one of the first companies that, apart from its own OS, lets customers run other operating systems on its network switches, such as Cumulus Linux OS and Big Switch Networks Switch Light OS. While Cumulus and Big Switch have their own use cases, in this blog we will look specifically at configuring FTOS.

Boot process

The S4048-ON comes from the factory pre-configured for bare metal provisioning (BMP). This is what you will see when you boot the switch for the first time:

s4048_bmp

If you just want to boot FTOS, simply skip BMP by choosing A, and the switch will boot the OS.

After some time BMP will time out. If you’ve missed the above wizard, you can also disable BMP from the CLI using the following commands:

> enable
# stop bmp
# config
# reload-type normal-reload
# exit
# reload

When prompted choose to save the configuration and proceed with reload. After the switch has rebooted check that the next boot is set to normal reload:

# show reload-type

Initial configuration

The first steps of any switch installation are assigning a hostname and configuring the management interface:

# hostname DELL4048-SWITCH
# int managementethernet 1/1
# ip address 172.10.10.2/24
# no shut
# management route 0.0.0.0/0 172.10.10.10

Then set admin / enable passwords and allow remote management via SSH:

# enable password 123456
# username admin password 123456
# ip ssh server enable

Configure time zone and NTP:

# clock timezone UTC 11
# ntp server 172.10.10.20
# show ntp associations
# show ntp status
# show clock

Firmware upgrade

Force10 switches have two boot banks, A: and B:. It’s good practice to upload new firmware into one boot bank and keep the old firmware in the other in case you need to roll back.

The easiest way to upgrade is via TFTP using Tftpd64, which you can download for free from here. If you’re upgrading an existing switch, make sure to save the running config and make a backup. If it’s an initial install you can skip this step.

# copy run start
# copy start tftp://10.0.0.1/FORCE10_SWITCH_01.01.16.conf

Then upload the new firmware to boot bank B:, change the active boot bank to B: and reload:

# show version
# show boot system stack-unit 1
# upgrade system tftp://10.0.0.1/FTOS-SK-9.9.0.0P9.bin b:
# conf t
# boot system stack-unit 1 primary system b:
# exit
# reload

You will be prompted to save the configuration and reboot. After the reboot you may be asked to enable SupportAssist. SupportAssist helps to automatically open Dell service tickets if there is a switch fault. You can enable SupportAssist by running the following commands and answering the prompts:

supportassist

# conf t
# support-assist activate
# support-assist activity full-transfer start now
# show support-assist status

My pair of switches was configured in a Virtual Link Trunking (VLT) domain. I’ll explain how VLT works later in the series, but from the upgrade point of view each switch in a VLT domain is treated as a separate switch and has to be upgraded separately. If you decide to use a stack instead of VLT, you can find the upgrade process for a Force10 stack in my other post about Dell MXL switches here.

Spanning tree

Spanning Tree Protocol (STP) helps to prevent network topology loops and is highly recommended for use in any network. Switches connected in an actual loop topology are rare in today’s networks, but STP can save you from the consequences of a potential human error, such as a port channel misconfiguration. If, instead of creating one port channel with two links, you by mistake create two port channels with one link each, and both carry the same VLANs, you’ve accidentally created a loop, which will bring your whole network to an immediate halt.

It’s good practice to enable STP as a safeguard against such configuration errors. The S4048-ON supports STP, RSTP, MSTP and PVST+. In my case the S4048s were uplinked into an HP core, which supported STP, RSTP and MSTP. If you have Cisco switches in your network core you can use PVST+. I used RSTP, which is a good choice if you don’t require the enhancements of MSTP or PVST+ in your network. Just make sure not to use basic STP, as it provides the slowest convergence.

# protocol spanning-tree rstp
# no disable
# show spanning-tree rstp

In every STP topology there is also a root switch, which by default is selected automatically. For more deterministic STP behaviour it’s recommended to select the root switch manually by assigning it the lowest STP priority. Typically your core switch should be your root switch. In my case it was an HP core switch, which was assigned a priority of “0”.

When configuring server- and storage-facing ports, make sure to enable EdgePort mode to minimize the time it takes for the port to come online:

# int range Te1/45-1/48
# spanning-tree rstp edge-port
# switchport
# no shut

If you want to know more about how STP works, you can read a few of my previous blog posts on STP here and here.

Flow control

To avoid dropped packets on 10Gb switch ports at times of potentially heavy utilization, it is also a best practice to, at a minimum, enable bi-directional Flow Control on the storage array ports. I enabled it on the iSCSI links connected from the Dell Compellent storage array:

# int range Te1/17-1/18
# flowcontrol rx on tx on

If you are specifically interested in switch best practices for Compellent and EqualLogic storage arrays, Dell has a full list of guides for various switches on its community wiki here.

Port channels and VLANs

Port channels and VLANs are configured similarly to any other switch, but I include them here in case you want to know the syntax. In this example we have two access ports, 1/46 and 1/47, and an uplink to the core configured as port channel 1:

# interface port-channel 1
# switchport
# no shutdown

# interface range Te1/1-1/2
# port-channel-protocol LACP
# port-channel 1 mode active
# no shutdown

# int vlan 254
# untagged Te1/46-1/47
# tagged po 1

Keep in mind that port channels are used either in single-switch configurations or when two or more switches are stacked together. If you’re using Virtual Link Trunking (VLT), you will need to create Virtual Link Trunks (VLTs), which are similar to port channels but have a slightly different syntax. We will talk about VLT in much more detail in the following Force10 posts.

Conclusion

One feature which I didn’t specifically mention in this blog post is Jumbo Frames. I tend not to use them in my deployments until I see convincing evidence of them making a difference for iSCSI/NFS storage implementations. I did a post about Jumbo Frames a long time ago here and haven’t changed my opinion since. I’m interested to hear your thoughts if you have a different take on that.

References

vSphere SDRS Design Considerations

June 26, 2016

If your vSphere cluster happens to be licensed with the Enterprise Plus edition, you may be aware of some of the advanced storage management features it includes, such as Storage DRS and Profile-Driven Storage.

These two features work together to let you optimise VM distribution between multiple VMware datastores from a capability, capacity and latency perspective, much like DRS does for memory and compute. But they have some interoperability limitations, which I want to discuss in this post.

Datastore Clusters

In simple terms, a datastore cluster is a collection of multiple datastores which can be seen as a single entity from a VM provisioning perspective.

datastore_cluster

VMware poses certain requirements for datastore clusters, but in my opinion the most important one is this:

Datastore clusters must contain similar or interchangeable datastores.

In other words, all of the datastores within a datastore cluster should have the same performance properties. You should not mix datastores provisioned on an SSD tier with datastores on SAS and SATA tiers, and vice versa. The reason is simple: datastore clusters are used by SDRS to load-balance VMs between the datastores of a datastore cluster. SDRS balances VMs based on datastore capacity and I/O latency only and is not storage capability aware. If you had SSD, SAS and SATA datastores all under the same cluster, SDRS would simply move all VMs to the SSD-backed datastores, because they have the lowest latency, and leave SAS and SATA empty, which makes little sense.

Design Decision 1:

  • If you have several datastores with the same performance characteristics, combine them all in a datastore cluster. Do not mix datastores from different arrays or array storage tiers in one datastore cluster. A datastore cluster is not a storage tiering solution.

Storage DRS

As already mentioned, SDRS is a feature which, when enabled at the datastore cluster level, lets you automatically (or manually) distribute VMs between datastores based on datastore capacity utilization and I/O latency. VM placement recommendations and datastore maintenance mode are among the other useful features of SDRS.

storage_drs

Quite often SDRS is perceived as a feature that can work with Profile-Driven Storage to enforce VM Storage Policy compliance. One scenario that is often brought up is a VM with multiple .vmdk disks, each with a certain storage capability, where one of the disks has mistakenly been Storage vMotion’ed to a datastore which does not meet its storage capability requirements. Can SDRS automatically move the disk back to a compliant datastore or notify you that the VM is not compliant? The answer is no. SDRS does not take storage capabilities into account and makes decisions based only on capacity and latency. This may be implemented in future versions, but is not supported in vSphere 5.

Design Decision 2:

  • Use datastore clusters in conjunction with Storage DRS to get the benefit of VM load-balancing and placement recommendations. SDRS is not storage capability aware and cannot enforce VM Storage Policy compliance.

Profile-Driven Storage

So if SDRS and datastore clusters are not capable of supporting multiple tiers of storage, then what is? Profile-Driven Storage is aimed at exactly that. You can assign user-defined or system-defined storage capabilities to a datastore and then create a VM Storage Policy and assign it to a VM. The VM Storage Policy includes the list of required storage capabilities, and only those datastores that match them will be suggested as a target for a VM assigned to that policy.

You can create storage capabilities manually, such as SSD, SAS and SATA, or more abstract ones, such as Bronze, Silver and Gold, and assign them to the corresponding datastores. Or you can leverage VASA, which assigns the corresponding storage capabilities automatically. Below is an example of a datastore presented from a Dell Compellent storage array.

datastore_capabilities

You can then use storage capabilities from the VASA provider to create VM Storage Policies and assign them to VMs accordingly.

VASA.jpg

Design Decision 3:

  • If you have more than one datastore storage type, use Profile-Driven Storage to enforce VM placement based on VM storage requirements. VASA can simplify storage capabilities management.

Conclusion

If all of your datastores have the same performance characteristics, such as a number of LUNs auto-tiered on the storage array side, then one SDRS-enabled datastore cluster is a perfect solution for you.

But if your storage design is slightly more complex and you have datastores with different performance characteristics, such as SSD, SAS and SATA, leverage Profile-Driven Storage to control VM placement and enforce compliance. Just make sure to use a separate cluster for each tier of storage and you will get the most benefit out of vSphere Storage Policy-Based Management.

vDS Health Check: Useful, but Overlooked

June 11, 2016

As of June 30, 2016, the vSphere Enterprise licence will no longer be available. As more and more customers move to the Enterprise Plus licensing scheme, we will see wider adoption of Enterprise Plus features, such as vSphere Distributed Switch, SIOC, NIOC and Storage DRS.

Therefore, there will be continuing demand for better coverage of these features, and I want to start blogging about them more to meet this demand. The first post will be about one of the hidden gems: vSphere Distributed Switch Health Check.

Feature overview

The reason why I picked Health Check specifically is because it’s very helpful when troubleshooting connectivity issues on vSphere Distributed Switch uplinks. At the same time it’s lesser known, because it’s buried deep in the vDS settings section, is available only from the Web Client and is disabled by default.

vDS health check is capable of doing the following tests:

  • VLAN and MTU
  • Teaming and failover

By sending broadcasts from one link and receiving them on another, vDS health check can determine if a VLAN is not allowed on a trunk or if there is an MTU mismatch. In the same way, if you’re using LACP, vDS will alert you about any port channel misconfigurations.

Usage example

Before you can start using vDS health check you need to enable it in vSphere Web Client > Networking > dvSwitch > Manage > Settings > Health check. Click on the Edit button and enable both tests.

enable_healthcheck

Now if you go to the Monitor tab and click on the Health section, after a few minutes of initial checks you will see a per-host breakdown of identified issues.

healthcheck_results

In my case I was able to immediately determine that VLAN 120 was not trunked on the physical switch. The port group this VLAN ID was assigned to had no VMs at the time, and the issue was fixed proactively, before it could cause any problems.

vlan_mismatch

Possible use cases

The above example is a very straightforward one. The VLAN was not added to the trunk on any of the physical switch uplink ports, so the issue would’ve been discovered right after the first VM was added to the port group.

But what if the VLAN was missing on only one of the host’s uplinks? The VM would be running fine on another host, and after a vMotion (during potential maintenance work on that host) it could get migrated to the affected host and lose connectivity. The result: impact to production workloads and time wasted on troubleshooting.

MTU checks are particularly helpful for environments where a non-standard MTU size is used, such as 9000-byte jumbo frames for iSCSI. It’s important for the MTU to match on both the vDS and the physical switch, and this check confirms exactly that.

And last but not least, the teaming and failover tests can be useful when you’re using the LACP capability of vDS and one of the uplinks is not added to the port channel configuration, which can also cause some nasty issues.

Conclusion

In my opinion vSphere Distributed Switch Health Check is one of those valuable but overlooked features. I suggest giving it a go if you haven’t already done so. It will notify you of any newly introduced network issues, or who knows, maybe it will even find a network mismatch in your current vDS configuration.