First Look at AWS Management Portal for vCenter Part 1: Deployment

December 18, 2016

Cloud has been a hot topic in IT for quite a while, for the valid reasons and benefits it brings, such as agility and economies of scale. More and more customers are embarking on the cloud journey, whether it's DR to cloud, using cloud as Tier 3 storage or even full production migrations aimed at shrinking the physical data center footprint.

vmware_aws

Even though full data center migrations to cloud are not that uncommon, many customers use cloud for certain use cases and keep other, more static workloads on-premises, where it may be more cost-effective. This means they end up with two environments that have to be managed separately, which introduces complexity into operational models, as each environment has its own management tools.

Overview

AWS Management Portal for vCenter helps to bridge this gap by connecting your on-premises vSphere environment to AWS and letting you perform basic management tasks, such as creating VPCs and security groups, deploying EC2 instances from AMI templates and even migrating VMs from vSphere to cloud, all without leaving the familiar vCenter user interface.

connector_architecture

The solution consists of two components: AWS Management Portal for vCenter, which is configured in AWS, and AWS Connector for vCenter, which is a Linux appliance deployed on-premises. Let's start with the management portal first.

Configure Management Portal

AWS Management Portal for vCenter, or simply AMP, can be accessed at the following link: https://amp.aws.amazon.com. Configuration is wizard-based and its main purpose is to set up authentication, so that vCenter users are able to access the AWS cloud through the portal.

aws_amp.jpg

You have the option of either using SAML, which has its own prerequisites, or simply choosing the connector to be your authentication provider, which is the easiest option.

If you choose the latter, you will need to pre-configure a trust relationship between AWS Connector and the portal. The first step of the process is to create an Identity and Access Management (IAM) user in the AWS Management Console and assign the "AWSConnector" IAM policy to it (the connector will then use this account to authenticate to AWS). This step is explained in detail in the Option 1: Federation Authentication Proxy section of the AWS Management Portal for vCenter User Guide.

add_admin
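
For reference, the same IAM user can also be created with the AWS CLI; here is a hedged sketch (the user name is made up, and the exact ARN of the "AWSConnector" managed policy should be verified against the user guide):

# aws iam create-user --user-name amp-connector
# aws iam attach-user-policy --user-name amp-connector --policy-arn arn:aws:iam::aws:policy/AWSConnector
# aws iam create-access-key --user-name amp-connector

The output of the last command contains the Access Key / Secret pair referenced below.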

You will also be asked to specify the vCenter accounts that will have access to AWS and to generate an AMP-Connector Key. Save your IAM account Access Key / Secret and the AMP-Connector Key; you will need them in the AWS Connector registration wizard.

Configure AWS Connector

AWS Connector is distributed as an OVA, which you can download here.

To assign a static IP address to the appliance you will need to open the VM console and log in as ec2-user with the password ec2pass. Run the setup script and change the network settings as desired. The connector also supports connecting to AWS through a proxy if required.

# sudo setup.rb

Browse to the appliance IP address to link AWS Connector to your vCenter and set up the appliance's password. You will then be presented with the registration wizard.

The wizard will ask you to provide a service account for AWS Connector (create a non-privileged domain account for it) and the credentials of the IAM trust account you created previously. You will also need your trust role's ARN (not the user's ARN), which you can get from the AMP-Connector Federation Proxy section of the AWS Management Portal for vCenter setup page.

If everything is done correctly, you will get to the plug-in registration page with the configuration summary, which will look similar to this:

registration_complete

Summary

AWS Connector will register a vCenter plug-in, which you will see in both the vSphere Client and the Web Client.

aws_wclient

That completes the deployment part. In the next blog post of the series we will talk in more detail about how AWS Management Portal can be leveraged to manage VPCs and EC2 instances.

Puppet Camp 2016 Recap

December 4, 2016

puppet-camp

Last week I had a chance to attend Puppet Camp 2016. Puppet Camp is a one-day event that is held once a year in many places around the world, including Australia. This time it was the fourth Melbourne conference, which gathered 240 attendees and several key partners, such as NetApp, Diaxion and Katana1.

In this blog post I want to give a quick overview of the keynote, customer and partner sessions, as well as my key takeaways from the conference.

First Impressions

This was my first time at Puppet Camp, and the sheer number of participants clearly shows that configuration management, and DevOps in general, attract a lot of attention from both customers and the channel.

imag5067_2

You may have heard how Cisco in Q3 of 2015 announced Puppet support for the Nexus 3000 and 9000 series switches. This was no accident. I had a chance to speak to NetApp, one of the vendors presenting at the conference, and they now have Puppet integration with their Data ONTAP / FAS platform, as well as the E-Series and the recently acquired SolidFire line of storage arrays. I'm sure many other hardware vendors will follow.

Keynote and Puppet Update

The conference had a single track of sessions spread throughout the day and was opened by a keynote from Robert Finn, APJ Sales Director at Puppet, who talked about the rising complexity of modern IT environments and the challenges that come with it. We have gone from tens of servers to hundreds of VMs and are now on the verge of the next evolution, from hundreds of VMs to thousands of containers. We can no longer manage environments manually, and that is where tools such as Puppet come into play and let us manage configuration and provisioning at scale.

Rob also mentioned the "State of DevOps Report", an annual survey Puppet has now been running for five years in a row. In 2016 they collected responses from 4,600 technical professionals and shared a lot of their findings in a public report, which I'll link in the references section below.

state_of_devops

Key takeaways: by introducing configuration management into their software development practices, organizations were able to achieve a 3x lower change failure rate and 24x faster recovery from failures.

Ronny Sabapathee, a Puppet Solutions Engineer, gave an overview of the new features in the latest Puppet Enterprise 2016.4, such as corrective change reporting, changes to Puppet Orchestrator, enhancements in Code Manager and API improvements.

Key takeaways: the Puppet ecosystem is growing quickly, with a Docker module, a Jenkins plugin, significant enhancements in the Azure module and VMware vRealize Automation/Orchestrator integration coming soon.

Customer Sessions

Rob Kenefik from SpecSavers spoke about their journey of scaling the free version of Puppet from 10 to 290 nodes, what issues they came across and what adjustments they had to make, especially around the DB back-end.

Key takeaways: don't use the embedded Puppet database for production deployments. PostgreSQL (which is now the default) provides the required scalability.
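
For reference, pointing PuppetDB at an external PostgreSQL instance comes down to a few settings in its database config, typically /etc/puppetlabs/puppetdb/conf.d/database.ini; a minimal sketch with a made-up host and credentials:

[database]
classname = org.postgresql.Driver
subprotocol = postgresql
subname = //pg.example.com:5432/puppetdb
username = puppetdb
password = changeme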

Steve Curtis from ANZ briefly discussed how they automated the deployment of Application Performance Monitoring (APM) agents using Puppet. Steve also has a post on the Puppet blog, which I'll link below.

Chris Harwood from Healthdirect Australia touched on the sensitive topic of organizational silos and how teams become too focused on their own performance, forgetting about the customers, who should be the key priority for businesses offering customer-facing services.

He then showed how Healthdirect moved some of the ops people into development teams, giving devs access to infrastructure and making them autonomous, which significantly improved their development workflows and release frequency.

Key takeaways: the key DevOps challenges are around people and processes, not technology. Teams not collaborating and lengthy infrastructure change management processes can significantly hinder development teams' performance.

Partner Sessions

Dinesh Siriwardhane, who represented Versent, compared the pros and cons of master/agent vs. masterless Puppet deployments and showed a demo of Puppet certificate management.

Key takeaways: a Puppet master simplifies centralized management and provides reporting capabilities, but can be a single point of failure. A masterless deployment using a Git repository has no single point of failure and is free, but can have major security repercussions if the Git repository is compromised.

imag5085_1
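
To give some context to the masterless option mentioned above, such a deployment usually boils down to every node pulling the manifests from Git and applying them locally on a schedule; a minimal sketch (the paths and repository layout are assumptions):

# Pull the latest manifests, then compile and enforce the catalog locally,
# without a Puppet master (run from cron on each node):
cd /etc/puppetlabs/code/environments/production && git pull --quiet
puppet apply --modulepath=modules manifests/site.pp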

Kieran Sweet and Pedram Sanayei from Sourced gave a presentation on Puppet integration with Azure and how using Puppet, instead of just the low-level Azure APIs and PowerShell, can significantly simplify deployment and configuration management in the Microsoft cloud.

Key takeaways: Azure Resource Manager is a big step forward from the old Azure Service Management (classic deployment model). In light of the significant recent enhancements in the Azure Puppet module, Azure can become a reasonable alternative to AWS.

Scott Coulton from Autopilot closed the conference with a session on Puppet integration with Docker, and more specifically on container orchestration tools such as Docker Swarm, Kubernetes, Mesos and Flocker. Be sure to check out Scott's blog and GitHub repository, where you can find a Puppet module for Docker Swarm, a Vagrant template and more.

Key takeaways: Docker can be used to deploy containers, but Puppet is still essential for keeping configuration consistent across the hosts.

Conclusion

I spoke to a lot of customers at the conference and what became apparent to me was that Puppet is not just another DevOps tool among many. It has a wide ecosystem of partners and has come a long way since it started as a small start-up in 2005.

It has a strong use case for general configuration management in Linux environments, as well as for providing application configuration consistency as part of CI/CD pipelines.

Speaking of the conference itself, I was pleasantly surprised by the quality of the sessions and the organization in general. Puppet Camp will definitely stay on my radar. I'd love to come back next year and geek out with the DevOps crowd again.

References

Dell Force10 Part 3: VLT Domain Configuration

July 31, 2016

dell-force10

In my previous post here I went through VLT basics and how it helps to establish a loop-free network topology in a modern datacenter. Now let's dive deeper and see how VLT is configured from the FTOS CLI.

VLT Configuration

The first step is to configure the backup links and the VLT interconnect. Dell S4048-ON switches have six 40Gb QSFP+ ports, two of which, 1/49 and 1/50, we will use for the VLTi. Repeat the same configuration on both switches.

# int range fo 1/49-1/50
# no shutdown

# interface port-channel 127
# description "VLT interconnect"
# channel-member fo 1/49
# channel-member fo 1/50
# no shutdown

Now that we have a VLT interconnect set up, let’s join the first switch to a VLT domain:

# vlt domain 1
# back-up destination 172.10.10.11
# peer-link port-channel 127
# primary-priority 1

The first switch points to the second switch's management IP as the backup destination, uses port channel 127 as the VLT interconnect and becomes the primary peer, because it's given the lowest priority of 1.

Do the same on the second switch, but now point to the first switch's management IP for the backup and assign a higher priority value to make this switch the secondary peer:

# vlt domain 1
# back-up destination 172.10.10.10
# peer-link port-channel 127
# primary-priority 8192

To confirm the VLT state use the following command:

# sh vlt brief

vlt_brief

As you can see, the VLTi and backup links are up and the switch can see its peer. For some additional VLT-specific information use these commands:

# sh vlt statistics
# sh vlt backup-link

I would also recommend using the following command to see the port channel state and confirm that both VLTi links are up:

# sh int po127

po_state

Conclusion

In this part of the Dell Force10 switch configuration series we quickly went through the initial VLT setup. We haven't touched on VLT LAG configuration yet; we will take a closer look at it in the next blog post.

Dell Force10 Part 2: VLT Basics

July 10, 2016

dell-force10

Last time I made a blog post on the initial configuration of Force10 switches, which you can find here. There I talked about firmware upgrades and basic features, such as STP and Flow Control. In this blog post I would like to touch on a key feature of Force10 switches: Virtual Link Trunking (VLT).

VLT is Force10's implementation of Multi-Chassis Link Aggregation Group (MLAG), which is similar to Virtual Port Channels (vPC) on Cisco Nexus switches. The goal of VLT is to let you establish one aggregated link to two physical network switches in a loop-free topology, which is not possible with two standalone switches.

You could say that switch stacking gives you similar capabilities, and you would be right. The issue with stacked switches, though, is that they act as a single switch not only from the data plane point of view, but also from the control plane point of view. The implication is that if you need to upgrade a switch stack, you have to reboot both switches at the same time, which brings down your network. If you have an iSCSI or NFS storage array connected to the stack, this may cause trouble, especially in enterprise environments.

With VLT you also have one data plane, but individual control planes. As a result, each switch can be managed and upgraded separately without full network downtime.

VLT Terminology

Virtual Link Trunking uses the following set of terms:

  • VLT peer – one of the two switches participating in VLT (you can have a maximum of two switches in a VLT domain)
  • VLT interconnect (VLTi) – interconnect link between the two switches to synchronize the MAC address tables and other VLT-related data
  • VLT backup link – heartbeat link used to send keep-alive messages between the two switches; it's also used to identify switch state if the VLTi link fails
  • VLT – the name of the feature itself (Virtual Link Trunking), as well as of a VLT link aggregation group (Virtual Link Trunk). We will call the aggregated link a VLT LAG to avoid ambiguity.
  • VLT domain – grouping of all of the above

VLT Topology

This is what a sample VLT domain looks like. S4048-ON switches have six 40Gb QSFP+ ports, two of which we use for the VLT interconnect. It's recommended to use a static LAG for the VLTi.

basic_vlt

Two 1Gb links are used for VLT backup. You can use switch out-of-band management ports for this. Four 10Gb links form a VLT LAG to the upstream core switch.

Use Cases

So where is this actually helpful? The vast majority of today's environments are virtualized and do not require LAGs. vSphere already uses teaming on vSwitch uplinks to distribute traffic across all network ports by default. There are some use cases in VMware environments where you can create a LAG to a vSphere Distributed Switch for faster link failure convergence or improved packet switching. Unless you have a really large vSphere environment this is generally not required, but you may use this option later on if needed. Read Chris Wahl's blog post here for more info.

Where VLT is really helpful is in building a loop-free network topology in your datacenter. See, all your vSphere hosts are connected to both Force10 switches for redundancy. Since traffic comes to either of the switches depending on which uplink is picked on an ESXi host, you have to make sure that VMs on switch 1 are able to communicate with VMs on switch 2. If all you had in your environment were two Force10 switches, you would establish a LAG between the two and be done with it. But if your network topology is a bit larger than this and you have at least one additional core switch/router in your environment, you are faced with the following dilemma: how can you ensure efficient traffic switching in your network without creating loops?

stp_loop

You can no longer create a LAG between the two Force10 switches, as it would create a loop. Your only option is to keep the switches connected only to the core and not to each other. And by doing that you force all traffic from VMs on switch 1 destined to VMs on switch 2, and vice versa, to traverse the core.

east_west_traffic

And that's where VLT comes into play. All east-west traffic between servers is contained within the VLT domain and doesn't need to traverse the core. As shown above, if we didn't use VLT, traffic from one switch to another would have to go from switch 1 to the core and then back from the core to switch 2. In a VLT domain, traffic between the switches goes directly from switch 1 to switch 2 using the VLTi.

Conclusion

That’s a brief introduction to VLT theory. In the next few posts we will look at how exactly VLT is configured and map theory to practice.

Dell Force10 Part 1: Initial Configuration

July 3, 2016

force10_S4048_on
When it comes to networking, Dell has two main series of switches: the PowerConnect/N-series, which run the DNOS 6.x operating system, and the S/Z-series switches, which run DNOS 9.x, derived from the Force10 OS (FTOS). In this series of blog posts we will go through the configuration of the Force10 switch series, using a Dell S4048-ON top-of-rack switch as an example.

It is interesting to note that, unlike the other S-series switches, the S4048-ON is an Open Networking switch. Dell is one of the first companies that, apart from its own OS, lets customers run other operating systems on its network switches, such as Cumulus Linux OS and Big Switch Networks Switch Light OS. While Cumulus and Big Switch have their own use cases, in this blog we will look specifically at configuring FTOS.

Boot process

The S4048-ON comes from the factory pre-configured for bare metal provisioning (BMP). This is what you will see when you boot the switch for the first time:

s4048_bmp

If you just want to boot FTOS, simply skip BMP by choosing A, and the switch will boot the OS.

After some time BMP will time out. If you've missed the above wizard, you can also disable BMP from the CLI using the following commands:

> enable
# stop bmp
# config
# reload-type normal-reload
# exit
# reload

When prompted, choose to save the configuration and proceed with the reload. After the switch has rebooted, check that the next boot is set to a normal reload:

# show reload-type

Initial configuration

The first steps of any switch installation are assigning a hostname and configuring the management interface:

# hostname DELL4048-SWITCH
# int managementethernet 1/1
# ip address 172.10.10.2/24
# no shut
# management route 0.0.0.0/0 172.10.10.10

Then set admin / enable passwords and allow remote management via SSH:

# enable password 123456
# username admin password 123456
# ip ssh server enable

Configure time zone and NTP:

# clock timezone UTC 11
# ntp server 172.10.10.20
# show ntp associations
# show ntp status
# show clock

Firmware upgrade

Force10 switches have two boot banks, A: and B:. It's a good practice to upload new firmware into one boot bank and keep the old firmware in the other in case you need to roll back.

The easiest way to upgrade is via TFTP using Tftpd64, which you can download for free from here. If you’re upgrading an existing switch, make sure to save the running config and make a backup. If it’s an initial install you can skip this step.

# copy run start
# copy start tftp://10.0.0.1/FORCE10_SWITCH_01.01.16.conf

Then upload new firmware to image B:, change active boot bank to B: and reload:

# show version
# show boot system stack-unit 1
# upgrade system tftp://10.0.0.1/FTOS-SK-9.9.0.0P9.bin b:
# conf t
# boot system stack-unit 1 primary system b:
# exit
# reload

You will be prompted to save the configuration and reboot. After the reboot you may be asked to enable SupportAssist. SupportAssist helps to automatically open Dell service tickets if there is a switch fault. You can enable SupportAssist by running the following commands and answering the prompts:

supportassist

# conf t
# support-assist activate
# support-assist activity full-transfer start now
# show support-assist status

My pair of switches was configured in a Virtual Link Trunking (VLT) domain. I'll explain how VLT works later in the series, but from the upgrade point of view each switch in a VLT domain is treated as a separate switch and has to be upgraded separately. If you decide to use a stack instead of VLT, you can find the upgrade process for a Force10 stack in my other post about Dell MXL switches here.

Spanning tree

Spanning Tree Protocol (STP) helps to prevent network topology loops and is highly recommended for use in any network. Switches connected in an actual loop topology are rare in today's networks, but STP can save you from the consequences of a potential human error, such as a port channel misconfiguration. If, instead of creating one port channel with two links, you mistakenly create two port channels with one link each, both carrying the same VLANs, you've accidentally created a loop, which will bring your whole network to an immediate halt.

It's a good practice to enable STP as a safeguard against such configuration errors. The S4048-ON supports STP, RSTP, MSTP and PVST+. In my case the S4048s were uplinked into an HP core, which supported STP, RSTP and MSTP. If you have Cisco switches in your network core you can use PVST+. I used RSTP, which is a good choice if you don't require the enhancements of MSTP and PVST+ in your network. Just make sure not to use the basic STP protocol, as it provides the slowest convergence.

# protocol spanning-tree rstp
# no disable
# show spanning-tree rstp

In every STP topology there is also a root switch, which by default is selected automatically. For more deterministic STP behaviour it's recommended to select the root switch manually, by assigning the lowest STP priority to it. Typically your core switch should be your root switch. In my case it was an HP core switch, which was assigned a priority of 0.
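
For reference, if one of the Force10 switches had to be the root instead, the priority could be lowered on the FTOS side; a hedged sketch (check the bridge-priority syntax against your FTOS release):

# conf t
# protocol spanning-tree rstp
# bridge-priority 0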

When configuring server- and storage-facing ports, make sure to enable EdgePort mode to minimize the time it takes for the port to come online:

# int range Te1/45-1/48
# spanning-tree rstp edge-port
# switchport
# no shut

If you want to know more about how STP works, you can read a few of my previous blog posts on STP here and here.

Flow control

To avoid dropped packets on 10Gb switch ports at times of potentially heavy utilization, it is also a best practice to, at a minimum, enable bi-directional Flow Control on the storage array ports. I enabled it on the iSCSI links connected from the Dell Compellent storage array:

# int range Te1/17-1/18
# flowcontrol rx on tx on

If you're specifically interested in switch best practices for Compellent and EqualLogic storage arrays, Dell has a full list of guides for various switches at the communities wiki here.

Port channels and VLANs

Port channels and VLANs are configured similarly to any other switch, but I'll include them here in case you want to know the syntax. In this example we have two access ports, 1/46 and 1/47, and an uplink to the core configured as port channel 1:

# interface port-channel 1
# switchport
# no shutdown

# interface range Te1/1-1/2
# port-channel-protocol LACP
# port-channel 1 mode active
# no shutdown

# int vlan 254
# untagged Te1/46-1/47
# tagged po 1

Keep in mind that port channels are used either in single-switch configurations or when two or more switches are stacked together. If you're using Virtual Link Trunking (VLT), you will need to create Virtual Link Trunks (VLTs), which are similar to port channels but have a slightly different syntax. We will talk about VLT in much more detail in the following Force10 blogs.

Conclusion

One feature which I didn't specifically mention in this blog post is Jumbo Frames. I tend not to use them in my deployments until I see convincing evidence of them making a difference for iSCSI/NFS storage implementations. I did a post about Jumbo Frames a long time ago here and haven't changed my opinion since. I'd be interested to hear your thoughts if you have a different take on that.

vSphere SDRS Design Considerations

June 26, 2016

data storage

If you happen to have your vSphere cluster licensed with the Enterprise Plus edition, you may be aware of some of the advanced storage management features it includes, such as Storage DRS and Profile-Driven Storage.

These two features work together to let you optimise VM distribution between multiple VMware datastores from a capability, capacity and latency perspective, much like DRS does for memory and compute. But they have some interoperability limitations, which I want to discuss in this post.

Datastore Clusters

In simple terms, a datastore cluster is a collection of multiple datastores that can be seen as a single entity from a VM provisioning perspective.

datastore_cluster

VMware poses certain requirements for datastore clusters, but in my opinion the most important one is this:

Datastore clusters must contain similar or interchangeable datastores.

In other words, all of the datastores within a datastore cluster should have the same performance properties. You should not mix datastores provisioned on an SSD tier with datastores on SAS or SATA tiers, and vice versa. The reason why is simple. Datastore clusters are used by SDRS to load-balance VMs between the datastores of a datastore cluster. SDRS balances VMs based on datastore capacity and I/O latency only and is not storage capability aware. If you had SSD, SAS and SATA datastores all under the same cluster, SDRS would simply move all VMs to the SSD-backed datastores, because they have the lowest latency, and leave SAS and SATA empty, which makes little sense.

Design Decision 1:

  • If you have several datastores with the same performance characteristics, combine them all in a datastore cluster. Do not mix datastores from different arrays or array storage tiers in one datastore cluster. Datastore clusters are not a storage tiering solution. See the PowerCLI sketch below for an example.
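
For illustration, here is a hedged PowerCLI sketch of building such a cluster (the names are made up and the cmdlets are as per PowerCLI 5.x, so verify them in your environment):

# Create a datastore cluster and move same-tier datastores into it
$dsc = New-DatastoreCluster -Name "SAS-Cluster" -Location (Get-Datacenter "DC01")
Get-Datastore "SAS-DS-01","SAS-DS-02" | Move-Datastore -Destination $dsc
# Enable Storage DRS in fully automated mode on the new cluster
Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated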

Storage DRS

As already mentioned, SDRS is a feature which, when enabled on a datastore cluster, lets you automatically (or manually) distribute VMs between datastores based on datastore storage utilization and I/O latency. VM placement recommendations and datastore maintenance mode are among the other useful features of SDRS.

storage_drs

Quite often SDRS is perceived as a feature that can work with Profile-Driven Storage to enforce VM Storage Policy compliance. One scenario that is often brought up is a VM with multiple .vmdk disks, where each disk has a certain storage capability. Say one of the disks has mistakenly been Storage vMotion'ed to a datastore which does not meet its storage capability requirements. Can SDRS automatically move the disk back to a compliant datastore, or at least notify you that the VM is not compliant? The answer is no. SDRS does not take storage capabilities into account and makes decisions based only on capacity and latency. This may be implemented in future versions, but is not supported in vSphere 5.

Design Decision 2:

  • Use datastore clusters in conjunction with Storage DRS to get the benefit of VM load-balancing and placement recommendations. SDRS is not storage capability aware and cannot enforce VM Storage Policy compliance.

Profile-Driven Storage

So if SDRS and datastore clusters are not capable of supporting multiple tiers of storage, then what is? Profile-Driven Storage is aimed exactly at that. You can assign user-defined or system-defined storage capabilities to a datastore, then create a VM Storage Policy and assign it to a VM. The VM Storage Policy includes the list of required storage capabilities, and only those datastores that match them will be suggested as a target for a VM assigned to that policy.

You can create storage capabilities manually, such as SSD, SAS and SATA, or more abstract ones, such as Bronze, Silver and Gold, and assign them to the corresponding datastores. Or you can leverage VASA, which assigns the corresponding storage capabilities automatically. Below is an example of a datastore connected from a Dell Compellent storage array.

datastore_capabilities

You can then use storage capabilities from the VASA provider to create VM Storage Policies and assign them to VMs accordingly.

VASA.jpg

Design Decision 3:

  • If you have more than one datastore storage type, use Profile-Driven Storage to enforce VM placement based on VM storage requirements. VASA can simplify storage capabilities management.

Conclusion

If all of your datastores have the same performance characteristics, such as a number of LUNs auto-tiered on the storage array side, then one SDRS-enabled datastore cluster is a perfect solution for you.

But if your storage design is slightly more complex and you have datastores with different performance characteristics, such as SSD, SAS and SATA, leverage Profile-Driven Storage to control VM placement and enforce compliance. Just make sure to use a separate cluster for each tier of storage and you will get the most benefit out of vSphere Storage Policy-Based Management.

vDS Health Check: Useful, but Overlooked

June 11, 2016

healthcheck

As of June 30, 2016 the vSphere Enterprise licence will no longer be available. As more and more customers move to the Enterprise Plus licensing scheme, we will see wider adoption of Enterprise Plus features, such as vSphere Distributed Switch, SIOC, NIOC and Storage DRS.

Therefore, there will be a continuing demand for better coverage of these features, and I want to start blogging about them more to meet this demand. And the first blog will be about one of the hidden gems: vSphere Distributed Switch Health Check.

Feature overview

The reason I picked health check specifically is that it's very helpful when troubleshooting connectivity issues on vSphere Distributed Switch uplinks. At the same time it's lesser known, because it's buried deep in the vDS settings section, available only from the Web Client, and is disabled by default.

vDS health check is capable of doing the following tests:

  • VLAN and MTU
  • Teaming and failover

By sending broadcasts from one link and receiving them on another, vDS health check can determine if a VLAN is not allowed on a trunk or if there is an MTU mismatch. In the same way, if you're using LACP, vDS will alert you if there are any port channel misconfigurations.

Usage example

Before you can start using vDS health check you need to enable it in vSphere Web Client > Networking > dvSwitch > Manage > Settings > Health check. Click on the Edit button and enable both tests.

enable_healthcheck
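
If you'd rather script the enablement than click through the UI, the health check config is exposed through the vSphere API; below is a hedged PowerCLI sketch (the dvSwitch name is made up, and the type and method names are as I recall them from the vSphere 5.x SDK, so verify before use):

# Enable the VLAN/MTU and teaming/failover health checks with a 1-minute interval
$vds = Get-VDSwitch "dvSwitch"
$vlanMtu = New-Object VMware.Vim.VMwareDVSVlanMtuHealthCheckConfig
$vlanMtu.Enable = $true
$vlanMtu.Interval = 1
$teaming = New-Object VMware.Vim.VMwareDVSTeamingHealthCheckConfig
$teaming.Enable = $true
$teaming.Interval = 1
$vds.ExtensionData.UpdateDVSHealthCheckConfig(@($vlanMtu, $teaming))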

Now if you go to the Monitor tab and click on the Health section, after a few minutes of initial checks you will see a per-host breakdown of identified issues.

healthcheck_results

In my case I was able to immediately determine that VLAN 120 was not trunked on the physical switch. The port group this VLAN ID was assigned to had no VMs at the time, so the issue was fixed proactively, before it could cause any problems.

vlan_mismatch

Possible use cases

The above example is a very straightforward one. The VLAN was not added to the trunk port on the physical switch on any of the uplink ports, and the issue would have been discovered right after the first VM was added to the port group.

But what if the VLAN was missing on only one of the host's uplinks? A VM would be running fine on another host, and after a vMotion (during potential maintenance work on that host) it could get migrated to the affected host and lose connectivity. The result: impact to production workloads and time wasted on troubleshooting.

MTU checks are particularly helpful in environments where a non-standard MTU size is used, such as 9000-byte jumbo frames for iSCSI. It's important for the MTU to match on both the vDS and the physical switch, and this check confirms exactly that.

And last but not least, teaming and failover tests can be useful when you’re using LACP capability of vDS and one of the uplinks is not added to the port channel configuration, which can also cause some nasty issues.

Conclusion

In my opinion vSphere Distributed Switch Health Check is one of those valuable but overlooked features. I suggest giving it a go if you haven't already done so. It will notify you of any newly introduced network issues, or, who knows, maybe it will even find a network mismatch in your current vDS configuration.

Force10 and vSphere vDS Interoperability Issue

June 10, 2016

dell-force10

Recently I had an opportunity to work with the Dell FX2 platform from the design and delivery point of view. I was deploying an FX2s chassis with FC630 blades and FN410S 10Gb I/O aggregators.

I ran into an interesting interoperability glitch between Force10 and the vSphere Distributed Switch when using LLDP. LLDP is an equivalent of Cisco CDP, but is an open standard, and it allows vSphere administrators to determine which physical switch port a given vSphere Distributed Switch uplink is connected to. If you enable both Listen and Advertise modes, network administrators get similar visibility, but from the physical switch side.
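
For reference, LLDP can be enabled either from the Web Client or from PowerCLI; a hedged one-liner (the switch name is made up; the parameters are as per the PowerCLI VDS cmdlets, verify in your environment):

# Enable LLDP in both Listen and Advertise modes on the vDS
Get-VDSwitch "dvSwitch" | Set-VDSwitch -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation Both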

In my scenario, when LLDP was enabled on the vSphere Distributed Switch, the uplinks on all ESXi hosts started disconnecting and reconnecting intermittently, with log errors similar to this:

Lost uplink redundancy on DVPorts: "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15", "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15", "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15", "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15". Physical NIC vmnic1 is down.

Network connectivity restored on DVPorts: "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15", "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15". Physical NIC vmnic1 is up

Uplink redundancy restored on DVPorts: "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15", "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15", "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15", "1549/03 4b 0b 50 22 3f d7 8f-28 3c ff dd a4 76 26 15". Physical NIC vmnic1 is up

Issue Troubleshooting

The FX2 I/O aggregator logs were reviewed for potential errors and the following entries were found:

%STKUNIT0-M:CP %DIFFSERV-5-DSM_DCBX_PFC_PARAMETERS_MISMATCH: PFC Parameters MISMATCH on interface: Te 0/2

%STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/2

%STKUNIT0-M:CP %IFMGR-5-OSTATE_UP: Changed interface state to up: Te 0/2

This clearly looks like some DCB negotiation issue between Force10 and the vSphere distributed switch.

Root Cause

Priority Flow Control (PFC) is one of the protocols from the Data Center Bridging (DCB) family. DCB was purpose-built for converged network environments, where 10Gb links carry both Ethernet and FC traffic in the form of FCoE. In such a scenario, PFC can pause Ethernet frames when FC is not getting enough bandwidth and that way prioritise the latency-sensitive storage traffic.

In my case the NIC ports on the QLogic 57840 adaptors were used for 10Gb Ethernet and iSCSI, not FCoE (which is very uncommon unless you're using a Cisco UCS blade chassis). So the question is, why were the Force10 switches trying to negotiate FCoE? And what did it have to do with enabling LLDP on the vDS?

The answer is simple. LLDP advertises not only the port numbers, but also the port capabilities. Data Center Bridging Exchange Protocol (DCBX) uses LLDP to convey the capabilities and configuration of FCoE features between neighbours. This is why enabling LLDP on the vDS triggered the issue. When the Force10 switches determined that the vDS uplinks were CNA adaptors (which was in fact true, I was just not using FCoE), they started to negotiate FCoE using DCBX, which didn't really go well.

Solution

The easiest solution to this problem is to disable DCB on the Force10 switches using the following command:

# conf t
# no dcb enable

Alternatively you can try and disable FCoE from the ESXi end by using the following commands from the host CLI:

# esxcli fcoe nic list
# esxcli fcoe nic disable -n vmnic0

Once FCoE has been disabled on all NICs, run the following command and you should get an empty list:

# esxcli fcoe adapter list

Conclusion

It is still not clear why a PFC mismatch would cause the vDS uplinks to start flapping. If the switch cannot establish an FCoE connection, it should just ignore it, but that doesn't seem to be the case on Force10. So if you run into a similar issue, simply disable DCB on the switches and it should fix it.

History of vSphere Storage Size Limitations

June 5, 2016

data-storage

This seems like a straightforward topic. If you are on vSphere 6 you can create VMFS datastores and VM disks as big as 64TB (62TB for VM disks, to be precise). The reality is, not all customers are running the latest and greatest, for various reasons. The most common one is concern about reliability. vSphere 6 is still at version 6.0; once vSphere 6.1 comes out we will see wider adoption. At this stage I see various versions of vSphere 5 in the field, and even vSphere 4 at times, which has officially been unsupported by VMware since May 2015. So it's not surprising that I still get this question: what are the datastore and disk size limits for the various vSphere versions?

Datastore size limit

The maximum datastore size for both VMFS3 and VMFS5 is 64TB. What you need to know is that VMFS3 uses the MBR partition style, which is limited to 2TB. The way VMFS3 overcomes this limitation is by using extents. To extend a VMFS3 datastore to 64TB you would need 32 x 2TB LUNs on the storage array. VMFS5 uses the GPT partition style and can be extended to 64TB by expanding one underlying LUN, without using extents, which is a big plus.

VMFS3 datastores are rare these days, unless you're still on vSphere 4. The only consideration here is whether your vSphere 5 environment was a greenfield build. If the answer is yes, then all your datastores are VMFS5 already. If the environment was upgraded from vSphere 4, you need to make sure all datastores have been upgraded (or, better, recreated) to VMFS5 as well. If the upgrade wasn't done properly you may still have some VMFS3 datastores in your environment.

Disk size limit

For .vmdk disks the limit had been 2TB for a long time, until VMware increased it to 62TB in vSphere 5.5. So even if all of your datastores are VMFS5, you still have the 2TB .vmdk limitation if you're on 5.0 or 5.1.

The VMFS3 file system also had a choice of block size: 1MB, 2MB, 4MB or 8MB. 2TB .vmdk disks were supported only with the 8MB block size, and the default was 1MB. So if you chose the default block size during datastore creation, you were limited to 256GB .vmdk disks.
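
For reference, the maximum .vmdk size scaled linearly with the VMFS3 block size:

  • 1MB block size – 256GB
  • 2MB block size – 512GB
  • 4MB block size – 1TB
  • 8MB block size – 2TB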

The above limits led to a proliferation of Raw Device Mapping disks in many pre-5.5 environments. Customers who needed VM disks bigger than 2TB had to use RDMs, as physical RDMs on VMFS5 support up to 64TB (pRDMs on VMFS3 were still limited to 2TB).

This table summarises the storage configuration maximums for vSphere versions 4.0 to 6.0:

vSphere   Datastore Size   VMDK Size   pRDM Size
4.0       64TB             2TB         2TB
4.1       64TB             2TB         2TB
5.0       64TB             2TB         64TB
5.1       64TB             2TB         64TB
5.5       64TB             62TB        64TB
6.0       64TB             62TB        64TB

Summary

The bottom line is, if you're on vSphere 5.5 or 6.0 and all your datastores are VMFS5, you can forget about the legacy .vmdk disk size limitations. And if you're not on vSphere 5.5 yet, you should consider upgrading as soon as possible, as vSphere 5.0 and 5.1 reach end of support on 24 August 2016.

Dell Repository Manager: Bootable ISO Issues

May 23, 2016

problem_solution

In one of my previous posts I described the process of upgrading Dell FX2 chassis firmware using Dell Repository Manager (DRM).

In an ideal world you just follow the process and in an hour or two your chassis is upgraded. Sometimes, though, you may run into issues. I want to go through some of them in this post, including possible remediation.

Issue Description

When exporting firmware to a bootable ISO, you may find that DRM is unable to download some of the bundle components, with the following error in the Job Result:

Processing failed:
Failed downloading files:
Diagnostics_Application_PWMC8_LN64_OSC_1.1_A00.BIN

And errors in the Log:

60. 24/03/2016 5:58:50 PM Export to Bootable ISO : Downloaded 34 / 56
61. 24/03/2016 5:59:44 PM Export to Bootable ISO : Error downloading some files
62. 24/03/2016 5:59:45 PM Export to Bootable ISO : Failed exporting to Bootable ISO.

Workaround #1: Skip the Component

You can try the "Continue download irrespective of any error (in the selected components)" option in the export dialog. It won't help to get the component downloaded, but you will get a bootable ISO.

However, DRM will still keep the failed component in the bundle and try to install it during the upgrade, which will obviously fail (update 16/56):

failed_update

Once the upgrade is finished you will get the following error at the end:

Note: Some update requires machine reboot. Please reboot to CD/DVD to continue for the failed update because of the dependency…

upgrade_status

No matter how many times you reboot, you will obviously get the same error. You can ignore it, if you're 100% sure the skipped component is what causes the upgrade to fail, or use Workaround #2.

Workaround #2: Create Custom ISO

When you create a repository in DRM, it's populated with pre-built components and bundles. But you can create custom repositories. The idea is that you can exclude the failed component from the repository by creating the repository manually.

Assuming you already have the base repository configured, do the following:

  • Open the existing repository and click on the Components tab
  • Deselect the failed component in the component list (in my case it was Diagnostics_Application_PWMC8_LN64_OSC_1.1_A00.BIN)
  • Click on the “Copy To” button:

custom_components

  • In the opened dialogue select "Create NEW Repository and copy component(s) into it"
  • Follow the wizard, and when you click Finish the components will be copied to the newly created repository
  • Open the new repository and click on the Components tab
  • Select all components and click on the "Copy To" button once again
  • This time select "Create a NEW Bundle in the same repository and add component(s) into it"
  • On the next screen give the bundle a name and make sure to choose "Linux 32-bit and 64-bit" as the OS Type

custom_bundle

As a result you should get a new bundle, which you can export to a bootable ISO using the same process.

Workaround #3: Use Server Update Utility

If none of the above helps, you can fall back to a proven upgrade approach and use the Server Update Utility (SUU). SUU is a huge 12GB ISO to download, but you can use Dell Download Manager, which supports resuming interrupted downloads. Make sure to disable your proxy! Dell Download Manager does not support resuming an interrupted download if you're using a proxy server.

SUU is not a bootable ISO. Previously you had to use the Dell Systems Build and Update Utility (SBUU) to boot from first, and then mount the SUU ISO to proceed with the upgrade. Starting with Dell 11G servers you don't need it anymore and can upgrade firmware straight from the Dell Lifecycle Controller (LC).

You'll need to boot into the Lifecycle Controller and choose Firmware Update > Launch Firmware Update > Local Drive (CD or DVD or USB). Mount the SUU ISO and the rest is fairly straightforward: LC will upgrade the firmware and reboot the blade.

lc_upgrade

Conclusion

Dell Repository Manager is the recommended approach for upgrading firmware on Dell hardware. Unlike SUU, DRM downloads the latest updates and only the necessary components. It is also capable of making a bootable ISO.

If you have issues, fall back on the Server Update Utility, as it's bulletproof and always works. But be prepared to download a 12GB ISO image, and make sure you have an option to bypass the proxy.