
First Look at AWS Management Portal for vCenter Part 1: Deployment

December 18, 2016

Cloud has been a hot topic in IT for quite a while, for good reason: it brings benefits such as agility and economies of scale. More and more customers are embarking on the cloud journey, whether it's DR to the cloud, using the cloud as a Tier 3 storage target or even full production migrations aimed at shrinking the physical data center footprint.

[Image: VMware and AWS]

Even though full data center migrations to the cloud are not that uncommon, many customers use the cloud for certain use cases and keep other, more static workloads on-premises, where they may be more cost-effective to run. As a result, they end up with two environments that have to be managed separately. This introduces complexity into operational models, as each environment has its own management tools.

Overview

AWS Management Portal for vCenter helps to bridge this gap by connecting your on-premises vSphere environment to AWS and letting you perform basic management tasks, such as creating VPCs and security groups, deploying EC2 instances from AMI templates and even migrating VMs from vSphere to the cloud, all without leaving the familiar vCenter user interface.
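For context, these portal tasks map to standard EC2 API operations. The following AWS CLI commands are a rough sketch of the same actions performed directly from the command line; the CIDR blocks, resource IDs, AMI ID and instance type below are placeholders, not values taken from the portal:

# Create a VPC and a subnet (CIDR blocks are arbitrary examples)
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1a2b3c4d --cidr-block 10.0.1.0/24

# Create a security group in that VPC
aws ec2 create-security-group --group-name amp-demo-sg --description "AMP demo" --vpc-id vpc-1a2b3c4d

# Launch an EC2 instance from an AMI template
aws ec2 run-instances --image-id ami-1a2b3c4d --instance-type t2.micro --subnet-id subnet-1a2b3c4d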

[Image: AWS Connector architecture]

The solution consists of two components: AWS Management Portal for vCenter, which is configured on the AWS side, and AWS Connector for vCenter, which is a Linux appliance deployed on-premises. Let's start with the management portal.

Configure Management Portal

AWS Management Portal for vCenter, or simply AMP, can be accessed at https://amp.aws.amazon.com. Configuration is wizard-based, and its main purpose is to set up authentication so that vCenter users can access the AWS cloud through the portal.

[Image: AWS Management Portal for vCenter setup page]

You have the option of either using SAML, which has its own prerequisites, or simply choosing the connector as your authentication provider, which is the easiest option.

If you choose the latter, you will need to pre-configure a trust relationship between AWS Connector and the portal. The first step of the process is to create an Identity and Access Management (IAM) user in the AWS Management Console and attach the “AWSConnector” IAM policy to it (the connector will then use this account to authenticate to AWS). This step is explained in detail in the “Option 1: Federation Authentication Proxy” section of the AWS Management Portal for vCenter User Guide.

[Image: adding administrators]

You will also be asked to specify the vCenter accounts that will have access to AWS and to generate an AMP-Connector Key. Save your IAM account access key and secret, as well as the AMP-Connector Key; you will need them in the AWS Connector registration wizard.
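If you prefer the command line to the console, the same IAM user can be created with the AWS CLI. This is just a sketch: the user name is an example, and it assumes the AWS-managed “AWSConnector” policy; if your setup uses a custom policy, substitute its ARN.

# Create the IAM user the connector will authenticate with (name is an example)
aws iam create-user --user-name amp-connector

# Attach the AWSConnector policy (assumes the AWS-managed policy ARN)
aws iam attach-user-policy --user-name amp-connector --policy-arn arn:aws:iam::aws:policy/AWSConnector

# Generate the access key / secret to save for the connector registration wizard
aws iam create-access-key --user-name amp-connector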

Configure AWS Connector

AWS Connector is distributed as an OVA, which you can download here:

To assign a static IP address to the appliance, you will need to open the VM console and log in as ec2-user with the password ec2pass. Run the setup script and change the network settings as desired. The connector also supports connecting to AWS through a proxy if required.

# sudo setup.rb

Browse to the appliance IP address to link AWS Connector to your vCenter and set the appliance's password. You will then be presented with the registration wizard.

The wizard will ask you to provide a service account for AWS Connector (create a non-privileged domain account for it) and the credentials of the IAM trust account you created previously. You will also need your trust role's ARN (not the user's ARN), which you can get from the AMP-Connector Federation Proxy section of the AWS Management Portal for vCenter setup page.
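If you need to look the role ARN up again later, it should also be visible in IAM. Assuming you know the name of the trust role the AMP setup created (the name below is a placeholder), something like this works from the AWS CLI:

# Look up a role's ARN by name (role name is a placeholder)
aws iam get-role --role-name My-AMP-Trust-Role --query Role.Arn --output text

# Or list all roles and find it in the output
aws iam list-roles --query "Roles[].[RoleName,Arn]" --output table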

If everything is done correctly, you will get to the plug-in registration page with the configuration summary, which will look similar to this:

[Image: registration complete summary]

Summary

AWS Connector will register a vCenter plug-in, which you will see in both the vSphere Client and the Web Client.

[Image: AWS plug-in in the vSphere Web Client]

That completes the deployment part. In the next blog post of the series we will look in more detail at how AWS Management Portal can be leveraged to manage VPCs and EC2 instances.

Dell Force10 Part 2: VLT Basics

July 10, 2016

[Image: Dell Force10]

Last time I wrote a blog post on the initial configuration of Force10 switches, which you can find here. There I talked about firmware upgrades and basic features, such as STP and Flow Control. In this blog post I would like to touch on a key feature of Force10 switches: Virtual Link Trunking (VLT).

VLT is Force10's implementation of Multi-Chassis Link Aggregation Group (MLAG), which is similar to Virtual Port Channels (vPC) on Cisco Nexus switches. The goal of VLT is to let you establish one aggregated link to two physical network switches in a loop-free topology, which is not possible with two standalone switches.

You could say that switch stacking gives you similar capabilities, and you would be right. The issue with stacked switches, though, is that they act as a single switch not only from the data plane point of view, but also from the control plane point of view. The implication is that if you need to upgrade a switch stack, you have to reboot both switches at the same time, which brings down your network. If you have an iSCSI or NFS storage array connected to the stack, this may cause trouble, especially in enterprise environments.

With VLT you also have one data plane, but individual control planes. As a result, each switch can be managed and upgraded separately without full network downtime.

VLT Terminology

Virtual Link Trunking uses the following set of terms:

  • VLT peer – one of the two switches participating in VLT (you can have a maximum of two switches in a VLT domain)
  • VLT interconnect (VLTi) – interconnect link between the two switches to synchronize the MAC address tables and other VLT-related data
  • VLT backup link – heartbeat link used to send keepalive messages between the two switches; it is also used to determine switch state if the VLTi link fails
  • VLT – the name of the feature, Virtual Link Trunking, as well as of a VLT link aggregation group, a Virtual Link Trunk. We will call the aggregated link a VLT LAG to avoid ambiguity.
  • VLT domain – grouping of all of the above

VLT Topology

This’s what a sample VLT domain looks like. S4048-ON switches have six 40Gb QSFP+ ports, two of which we use for a VLT interconnect. It’s recommended to use a static LAG for VLTi.

[Image: basic VLT topology]

Two 1Gb links are used for VLT backup. You can use switch out-of-band management ports for this. Four 10Gb links form a VLT LAG to the upstream core switch.
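As a preview of the configuration posts to come, here is a minimal FTOS sketch of what this topology could map to on one of the VLT peers. The domain ID, port-channel numbers, interface names and backup IP address are assumptions for illustration, not a complete or verified configuration:

! VLTi: static LAG over two 40Gb QSFP+ ports (port numbers are examples)
interface Port-channel 128
 description VLTi
 channel-member fortyGigE 1/49
 channel-member fortyGigE 1/50
 no shutdown
!
! VLT domain; the backup destination is the peer's out-of-band management IP (example address)
vlt domain 1
 peer-link port-channel 128
 back-up destination 192.168.1.2
 primary-priority 1
!
! Port-channel to the core becomes a VLT LAG by referencing the matching port-channel on the peer
interface Port-channel 10
 vlt-peer-lag port-channel 10
 no shutdown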

Use Cases

So where is this actually helpful? The vast majority of today's environments are virtualized and do not require LAGs. vSphere already uses teaming on vSwitch uplinks to distribute traffic across all network ports by default. There are some use cases in VMware environments where you can create a LAG to a vSphere Distributed Switch for faster link failure convergence or improved packet switching. Unless you have a really large vSphere environment this is generally not required, but you can enable it later if needed. Read Chris Wahl's blog post here for more info.

Where VLT is really helpful is in building a loop-free network topology in your datacenter. All your vSphere hosts are connected to both Force10 switches for redundancy. Since traffic arrives at either of the switches depending on which uplink is picked on an ESXi host, you have to make sure that VMs on switch 1 are able to communicate with VMs on switch 2. If all you had in your environment were two Force10 switches, you would establish a LAG between the two and be done with it. But if your network topology is a bit larger than this and you have at least one additional core switch/router in your environment, you are faced with the following dilemma: how can you ensure efficient traffic switching in your network without creating loops?

[Image: STP loop]

You can no longer create a LAG between the two Force10 switches, as it would create a loop. Your only option is to keep the switches connected only to the core and not to each other. By doing that, you force all traffic from VMs on switch 1 destined for VMs on switch 2, and vice versa, to traverse the core.

[Image: east-west traffic through the core]

And that’s where VLT comes into play. All east-west traffic between servers is contained within the VLT domain and doesn’t need to traverse the core. As shown above, if we didn’t use VLT, traffic from one switch to another would have to go from switch 1 to core and then back from core to switch 2. In a VLT domain traffic between the switches goes directly form switch 1 to switch 2 using VLTi.

Conclusion

That’s a brief introduction to VLT theory. In the next few posts we will look at how exactly VLT is configured and map theory to practice.