Creating vRealize Operations Manager Alerts Using REST API

September 11, 2018

Whenever I’m faced with a repetitive configuration task, I search for ways to automate it. There’s nothing more boring than sitting and clicking through the GUI for hours performing the same thing over and over again.

These days most of the products I work with support a REST API, so scripting has become my solution of choice. But scripting requires you to know a scripting language, such as PowerShell, as well as certain SDKs and APIs, like PowerCLI and REST, and more importantly – time to write and test the script. If you’re going to use the script regularly, in the long term it’s worth the effort. But what if it’s a one-off task? You may well end up spending more time writing a script than it takes to perform the task manually. In this case there are more practical ways to improve your efficiency. One such way is to use developer tools like Postman.

The idea is that you can write a REST request that applies a certain configuration setting and use it as a template to make multiple calls by slightly tweaking the parameters. You would have to change the parameters manually for each request, which is not as elegant as providing an array of variables to a script, but still much quicker than clicking through the GUI.

Recently I worked on a VMware Validated Design (VVD) deployment for a customer, which required configuring dozens of vRealize Operations Manager alerts as part of the build. So I will use it as an example to demonstrate how you can save yourself hours by doing it in Postman instead of the GUI.

Collect Alert Properties

To create an alert in vROps you will need to specify certain alert properties in the REST API call body. You will need at least:

  • “pluginId” – ID of the outbound plugin, which is usually the Standard Email Plugin
  • “emailaddr” – recipient email address
  • “values” property under the alertDefinitionIdFilters XML element – this is the alert definition ID
  • “resourceKind” – resource that the alert is applicable for, such as VirtualMachine, Datastore, etc.
  • “adapterKind” – this is the adapter that the alert comes from, such as VMWARE, NSX, etc.

To determine the pluginId you will need to configure an outbound plugin in vROps and then make the following GET call to get the ID:
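In vROps 6.x the configured outbound plugins are exposed via the alert plugins endpoint (verify the exact path against your version’s API reference at https://vrops-hostname/suite-api/):

GET https://vrops-hostname/suite-api/api/alertplugins

Find your Standard Email Plugin entry in the response and note its pluginId.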

To find the values for the alert definition, resource kind and adapter kind properties, make the following GET call and search for the alert name in the results:
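The alert definitions endpoint returns all three in one response (again, path as in vROps 6.x; verify against your version’s API reference):

GET https://vrops-hostname/suite-api/api/alertdefinitions

Each alert definition entry contains its ID, the resource kind and the adapter kind it applies to.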

Create Alert in vROps

To create an alert in vROps, you will need to make a POST call to the following URI in XML format:

  • Use the following request URL: https://vrops-hostname/suite-api/api/notifications/rules
  • Click on the Headers tab and specify the key “Content-Type” with the value “application/xml”
  • Click on the Body tab, choose raw and in the drop-down choose “XML (application/xml)”
  • Copy the following XML request to the body and use it as a template
<ops:notification-rule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:ops="http://webservice.vmware.com/vRealizeOpsMgr/1.0/">
<ops:name>
No data received for Windows platform
</ops:name>
<ops:pluginId>c5f60db9-eb5b-47c1-8545-8ba573c7d289</ops:pluginId>
<ops:alertControlStates/>
<ops:alertStatuses/>
<ops:criticalities/>
<ops:resourceKindFilter>
<ops:resourceKind>Windows</ops:resourceKind>
<ops:adapterKind>EP Ops Adapter</ops:adapterKind>
</ops:resourceKindFilter>
<ops:alertDefinitionIdFilters>
<ops:values>AlertDefinition-EP Ops Adapter-Alert-system-availability-Windows</ops:values>
</ops:alertDefinitionIdFilters>
<ops:properties name="emailaddr">vrops@corp.local</ops:properties>
</ops:notification-rule>

As described before, make sure to replace the following properties with your own values: “pluginId”, the “values” property under the alertDefinitionIdFilters XML element, “resourceKind”, “adapterKind” and “emailaddr”.

As a result of the REST API call, the alert will be created in vROps.

For every other alert you can keep the plugin ID and email address the same and update only the alert definition, resource kind and adapter kind.

Conclusion

By using the same REST call and changing properties for each alert accordingly, I was able to finish the job much quicker and avoided hours of pain of clicking through the GUI. As long as you have a REST API endpoint to work with, the same approach can be applied to any repetitive task.

If you’d like to learn more, make sure to check out the VMware {code} project here for more information about VMware product APIs and SDKs.


Scripted CIFS Shares Migration

March 8, 2018

I don’t usually blog about Windows Server and Microsoft products in general, but the need for file server migration comes up in my work quite frequently, so I thought I’d make a quick post on that topic.

There are many use cases: it can be a migration from a NAS storage array to a Windows Server, or between an on-premises file server and the cloud. Every such migration involves copying data and recreating shares. Doing it manually is almost impossible, unless you have only a handful of shares. If you want to replicate all NTFS and share-level permissions consistently from source to destination, scripting is almost the only way to go.

Copying data

I’m sure there are plenty of tools that can perform this task accurately and efficiently. But if you don’t have any special requirements, such as encryption of data in transit, Robocopy is probably the simplest tool to use. It comes with every Windows Server installation and, starting from Windows Server 2008 R2, supports multithreading.

Below are the command line options I use:

robocopy \\file_server\source_folder D:\destination_folder /E /ZB /DCOPY:T /COPYALL /R:1 /W:1 /V /TEE /MT:128 /XD "System Volume Information" /LOG:D:\robocopy.log

Most of them are common, but there are a few worth pointing out:

  • /MT – use multithreading, 8 threads per Robocopy process by default. If you’re dealing with lots of small files, this can significantly improve performance.
  • /R:1 and /W:1 – Robocopy doesn’t copy locked files to avoid data inconsistencies. Default behaviour is to keep retrying until the file is unlocked. It’s important for the final data synchronisation, but for data seeding I recommend one retry and one second wait to avoid unnecessary delays.
  • /COPYALL and /DCOPY:T will copy all file and directory attributes, permissions, as well as timestamps.
  • /XD “System Volume Information” is useful if you’re copying an entire volume. If you don’t exclude the System Volume Information folder, you may end up copying deduplication and DFSR data, which in addition to wasting disk space, will break these features on the destination server.

Robocopy is typically scheduled to run at certain times of the day, preferably after hours. You can put it in a batch script and schedule it using Windows Scheduler. Just keep in mind that if you configure the job to stop after running for a certain number of hours, Windows Scheduler will stop only the batch script, and the Robocopy process will keep running. As a workaround, you can schedule another job with the following command to kill all Robocopy processes at a certain time of the day, say 6am:

taskkill /f /im robocopy.exe
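For reference, both scheduled jobs could be created from the command line similar to this (task names, script path and times here are just examples):

schtasks /create /tn "Robocopy Seeding" /tr "D:\scripts\robocopy_seed.bat" /sc daily /st 22:00
schtasks /create /tn "Robocopy Kill" /tr "taskkill /f /im robocopy.exe" /sc daily /st 06:00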

Duplicating shares

For copying CIFS shares I’ve been using the “sharedup” utility from EMC’s “CIFS Tools” collection. To get the tool, register a free account on https://support.emc.com. You can do that even if you’re not an EMC customer and don’t own an EMC storage array. From there you will be able to search for and download CIFS Tools.

If your source and destination file servers are completely identical, you can use sharedup to duplicate CIFS shares in one command. But that’s rarely the case. Often you want to exclude some of the shares or change paths if your disk drives or directory structure have changed. Sharedup supports input and output file command line options: you can generate a list of shares first, edit it and then import the shares to the destination file server.

To generate the list of shares first run:

sharedup \\source_server \\destination_server ALL /SD /LU /FO:D:\shares.txt /LOG:D:\sharedup.log

The resulting file will have records similar to these:

#
@Drive:E
:Projects ;Projects ;C:\Projects;
#
@Drive:F
:Home;Home;C:\Home;

Delete the shares you don’t want to migrate and update the target paths from C:\ to where your data actually is, as shown in the example below. Don’t change the “@Drive:E” headers – they specify the location of the source share, not the destination. Also worth noting is that you won’t see permissions listed anywhere in this file. It lists shares and share paths only; permissions are checked and copied at runtime.
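For example, if the Projects data now lives on E:\Projects on the destination server, the edited entry would look like this (the @Drive header stays untouched):

#
@Drive:E
:Projects ;Projects ;E:\Projects;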

Once you’re happy with the list, use the following command to import shares to the destination file server:

sharedup \\source_server \\destination_server ALL /R /SD /LU /FI:D:\shares.txt /LOG:D:\sharedup.log

For server local users and groups, sharedup will check if they exist on the destination. So if you run into an error similar to the following, make sure to first create those groups on the destination file server:

10:13:07 : WARNING : The local groups "WinRMRemoteWMIUsers__" and "source_server_WinRMRemoteWMIUsers__" do not exist on the \\destination_server server !
10:13:09 : WARNING : Please use lgdup utility to duplicate the missing local user(s) or group(s) from \\source_server to \\destination_server.
10:13:09 : WARNING : Unable to initialize the Security Descriptor translator

Conclusion

I created this post as a personal howto note, but I’d love to hear if it’s helped anyone else. Or if you have better tool suggestions to accomplish this task, please let me know!

vRealize Automation Disaster Recovery

January 14, 2018

Introduction

VMware has invested a lot of time and effort in vRealize Automation high availability. For medium and large deployment scenarios VMware recommends using a load balancer (Citrix, F5, NSX) to distribute traffic between vRA appliance and infrastructure components, as well as database clustering (such as MS SQL availability groups) for database high availability. Additionally, in vRA 7.3 VMware added support for automatic failover of vRA appliance’s embedded PostgreSQL database, which was a manual process prior to that.

There is a clear distinction, however, between high availability and disaster recovery. Generally speaking, HA covers redundancy within the site and is not intended to protect from full site failure. Site Recovery Manager (or another replication product) is required to protect vRA in a DR scenario, which is described in more detail in the vRealize Suite 7.0 Disaster Recovery by Using Site Recovery Manager 6.1 document.

In my opinion, there are two important aspects that are missing from the aforementioned document, which I want to cover in this blog post: restoring VM UUIDs and changing vRA IP address. I will cover them in the order that these tasks would usually be performed if you were to fail over vRA to DR:

  1. Exporting VM UUIDs
  2. Changing IP addresses
  3. Importing VM UUIDs

I will also briefly touch on how to change VM reservations, which is an important step as well, but one that is already very well covered in VMware documentation.

Note: this blog post does not provide configuration guidelines for VM replication software, such as Site Recovery Manager, Zerto or RecoverPoint and is focused only on DR aspects related to vRA itself. Refer to official documentation of corresponding products to determine how to set up VM replication to your disaster recovery site.

Exporting VM UUIDs

VMware uses two UUIDs to identify a VM. BIOS UUID (uuid.bios in the .vmx file) was the original identifier and is derived from the hardware the VM is provisioned on. But it’s not unique: if a VM is cloned, the clone will have the same BIOS UUID. So a second identifier was introduced, called Instance UUID (vc.uuid in the .vmx file), which is generated by vCenter and is unique within a single vCenter (two VMs in different vCenters can have the same Instance UUID).

When VMs are failed over, Instance UUIDs change. Compare VirtualMachine.Admin.AgentID (Instance UUID) and VirtualMachine.Admin.UUID (BIOS UUID) on original and failed over VMs.

Why does this matter? Because vRA uses Instance UUIDs to keep track of managed VMs.  If Instance UUIDs change, vRA will show the corresponding VMs as missing under Infrastructure > Managed Machines. And you won’t be able to manage them.

So it’s important to export VM Instance UUIDs before failover, which can then be used to restore the original values. This is how you can get the Instance UUID of a given VM using PowerCLI:

> (Get-VM vm_name).extensiondata.config.InstanceUUID

Here, on my GitHub page, you can find a script that I have put together to export Instance UUIDs of all VMs in CSV format.
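If you just need the gist of it, the export boils down to a couple of lines of PowerCLI (a minimal sketch, not the actual script, assuming an active Connect-VIServer session):

# Export every VM's name and Instance UUID to a CSV file
Get-VM |
    Select-Object Name, @{N='InstanceUUID'; E={$_.ExtensionData.Config.InstanceUuid}} |
    Export-Csv -NoTypeInformation -Path .\vm_vc_uuids.csv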

Changing IP addresses

Once you’ve saved the Instance UUIDs, you can move on to failover. vRA components should be started in the following order:

  1. MS SQL database
  2. vRA appliance
  3. IaaS server

If the network subnets that all components are connected to are stretched between the two sites, no additional reconfiguration is required when VMs are brought up at DR. But usually that’s not the case and servers need to be re-IP’ed. IaaS server network settings are changed the same way as on any other Windows server machine.

vRealize Appliance network settings are changed in the vRA appliance management interface, which can be accessed at https://vra-appliance-hostname:5480, under the Network > Address tab. The problem is, if IP addresses change at DR, it will be challenging to reach the vRA appliance over the network. To work around that, connect to the vRA VM console and run the following script from the CLI to change the appliance’s network settings:

# /opt/vmware/share/vami/vami_config_net

Don’t forget to update the DNS record for the vRA appliance. For the IaaS server it’s not needed, as long as you allow Dynamic DNS (DDNS) updates.

Importing VM UUIDs

After the failover all of your VMs will have a missing status in vRA. To make vRA recognize the failed over VMs you will need to revert the Instance UUIDs back to their original values. In PowerCLI this can be done in the following way:

> $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
> $spec.instanceUuid = '52da9b14-0060-dc51-4733-3b01e912edd2'
> $vm = Get-VM -Name vm_name
> $vm.Extensiondata.ReconfigVM_Task($spec)

I’ve written another script that will perform this task for you, which you can find on my GitHub page.

You will need two files to make the script work: the vm_vc_uuids.csv file you generated before, with the list of original VM Instance UUIDs, and the list of missing VMs in CSV format, which you can export from vRA after the failover from the Infrastructure > Managed Machines page. The script takes both files as command line options and restores the original Instance UUID on each missing VM.
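Under the hood it is essentially the reconfiguration call from above wrapped in a loop (a minimal sketch, not the actual script – the CSV column names here are assumptions):

$uuids = Import-Csv .\vm_vc_uuids.csv
foreach ($row in $uuids) {
    # Build a reconfiguration spec with the original Instance UUID
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.InstanceUuid = $row.InstanceUUID
    # Apply the spec to the VM with the matching name
    (Get-VM -Name $row.Name).ExtensionData.ReconfigVM_Task($spec) | Out-Null
}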

Once the UUIDs are imported, run an inventory data collection from the Infrastructure > Compute Resources > Compute Resources page. vRA will discover the VMs and update their status to “On”.

Updating reservations

If you try to run any Day 2 operation on a VM with the old reservation in place, you will get an error similar to this:

Error processing [Shutdown], error details:
Error getting property 'runtime.powerState' from managed object (null)
Inner Exception: Object reference not set to an instance of an object.

To manually update a VM reservation, on the Infrastructure > Managed Machines page hover over the VM and select Change Reservation.

This process is obviously not scalable, as it can take hours if you have hundreds of VMs. VMware offers an alternative solution that lets you update all VMs by using the Bulk Import feature available from Infrastructure > Administration > Bulk Imports. The idea is that you can export all VM configuration details in a CSV file, update the compute and storage reservation columns and import the file back into vRA. The vRealize Suite 7.0 Disaster Recovery by Using Site Recovery Manager 6.1 document gives very detailed instructions on how to do that in the “Bulk Import, Update, or Migrate Virtual Machines” section.

Conclusion

I hope this blog post helped to cover some gaps in VMware documentation. If you have any questions or comments, as always, feel free to leave them in the comments section below.


[SOLVED] Migrating vCenter Notifications

January 6, 2018

Why is this a problem?

VMware upgrades and migrations still comprise a large chunk of what I do in my job. If it is an in-place upgrade it is often more straightforward. The main consideration is making sure all compatibility checks are made. But if it is a rebuild, things get a bit more complicated.

Take for example a vCenter Server to vCenter Server Appliance migration. If you are migrating between 5.5, 6.0 and 6.5 you are covered by the vCenter Server Migration Tool. Recently I came across a customer using vSphere 5.1 (yes, it is not as uncommon as you might think). The vCenter Server Migration Tool does not support migration from vSphere 5.1, which is fair enough, as its end of support was August 2016. But as a result, you end up being on your own with your upgrade endeavours and have to do a lot of things manually. One such thing is migrating vCenter notifications.

You can go and do it by hand. Using a few PowerCLI commands you can list the currently configured notifications and then recreate them on the new vCenter. But knowing how clunky and slow this process is, I doubt you are looking forward to spending half a day configuring dozens of notifications one by one by hand (I sure am not).

I offer an easy solution

You may have seen a comic over on xkcd called “Is It Worth The Time?”, which gives you an estimate of how long you can work on making a routine task more efficient before you are spending more time than you save (across five years). As an example, if you can save one hour by automating a task that you do monthly, even if you spend two days on automating it, you will still break even within five years.

Knowing how often I do VMware upgrades, it is well worth it for me to invest time in automating them with a script. Since you probably do not do upgrades that often, for you it may not be – so I wrote this script for you.

If you simply want to get the job done, you can go ahead and download it from my GitHub page here (you will also need VMware PowerCLI installed on your machine for it to work) and then run it like so:

.\copy-vcenter-alerts-v1.0.ps1 -SourceVcenter old-vc.acme.com -DestinationVcenter new-vc.acme.com

The script includes help topics, which you can view by running the following command:

Get-Help -full .\copy-vcenter-alerts-v1.0.ps1

Or if you are curious, you can read further to better understand how the script works.

How does this work?

First of all, it is important to understand the terminology used in vSphere:

  • Alarm trigger – a set of conditions that must be met for an alarm warning and alert to occur.
  • Alarm action – operations that occur in response to triggered alarms. For example, email notifications.

The script takes source and destination vCenter IP addresses or host names as parameters and starts by retrieving the list of existing alerts. Then it compares alert definitions, and if an alert doesn’t exist on the destination, it is skipped, so be aware of that. The script will show you a warning, and you will be able to decide what to do with such alerts later.

Then, for each of the source alerts that exists on the destination, the script recreates the actions with the exact same triggers. Trigger settings, such as repeats (enabled/disabled) and trigger state changes (green to yellow, yellow to red, etc.), are also copied.

The script will not attempt to recreate an action that already exists, so feel free to run it multiple times if you need to.

What the script does not do

  1. The script does not copy custom alerts – if you have custom alert definitions, you will have to recreate them manually. It was not worth investing time in such a feature at this stage, as custom alerts are rare, and even when encountered, there is usually just a handful, which can be moved manually.
  2. Only email notification actions are supported – because they are the most common. If you use other actions, like SNMP traps, let me know and maybe I will include them in the next version.

PowerCLI cmdlets used

These are some of the useful VMware PowerCLI cmdlets I used to write the script; a short sketch of how they fit together follows the list:

  • Get-AlarmDefinition
  • Get-AlarmAction
  • Get-AlarmActionTrigger
  • New-AlarmAction
  • New-AlarmActionTrigger
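To give you an idea of how these cmdlets combine, below is a heavily simplified sketch of the copy logic – not the actual script. It assumes connections to both vCenters are already open via Connect-VIServer, handles a single alarm definition and copies email actions only (the alarm name is just an example):

# Find the same alarm definition on the source and destination vCenters
$srcAlarm = Get-AlarmDefinition -Server $srcVC -Name 'Host connection and power state'
$dstAlarm = Get-AlarmDefinition -Server $dstVC -Name $srcAlarm.Name

# Recreate each email action together with its triggers
foreach ($action in (Get-AlarmAction -AlarmDefinition $srcAlarm -ActionType SendEmail)) {
    $newAction = New-AlarmAction -AlarmDefinition $dstAlarm -Email -To $action.To -Subject $action.Subject
    # Replicate the source trigger state changes and repeat settings
    # (New-AlarmAction also creates a default trigger, so duplicates may need handling)
    foreach ($trigger in (Get-AlarmActionTrigger -AlarmAction $action)) {
        New-AlarmActionTrigger -AlarmAction $newAction -StartStatus $trigger.StartStatus `
            -EndStatus $trigger.EndStatus -Repeat:$trigger.Repeat
    }
}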

Error When Deploying VCSA or PSC

October 31, 2017

Recently, when helping a customer to deploy a new greenfield VMware 6.5 environment, I ran into an issue where a brand new vCenter Server Appliance and Platform Services Controller 6.5 build 5973321 failed to deploy to an ESXi host build 5969303.

Stage 1 (install) of the deployment completes successfully. In Stage 2 (setup), the VCSA installer, both for vCenter and PSC, first shows a prompt asking for credentials.

PSC Issue Description

After providing credentials, when installing an external PSC, installation fails with the following error:

Error:
Unable to connect to vCenter Single Sign-On: Failed to connect to SSO; uri:https://psc-hostname/sts/STSService/vsphere.local
Failed to register vAPI Endpoint Service with CM
Failed to configure vAPI Endpoint Service at the firstboot time

Resolution:
Please file a bug against VAPI

Installation wizard shows the following resulting error:

Failure:
A problem occurred during setup. Refresh this page and try again.

A problem occurred during setup. Services might not be working as expected.

A problem occurred while – Starting VMware vAPI Endpoint…

Appliance shows the following error in console:

Failed to start services. Firstboot Error.

Alternatively PSC can fail with the following error:

Error:
Unexpected failure: }
Failed to register vAPI Endpoint Service with CM
Failed to configure vAPI Endpoint Service at the firstboot time

Resolution:
Please file a bug against VAPI

VCSA Issue Description

After providing credentials, when installing vCenter with embedded PSC, installation fails with the following error:

Error:
Unable to start the Service Control Agent.

Resolution:
Search for these symptoms in the VMware knowledge base for any known issues and possible workarounds. If none can be found, collect a support bundle and open a support request.

Installation wizard shows the following resulting error:

Failure:
A problem occurred during setup. Refresh this page and try again.

A problem occurred during setup. Services might not be working as expected.

A problem occurred while – Starting VMware Service Control Agent…

Appliance shows the same error in console.

Alternatively VCSA can fail with the following error:

Error:
Encountered an internal error.

Traceback (most recent call last):
  File "/usr/lib/vmidentity/firstboot/vmidentity-firstboot.py", line 1852, in main
    vmidentityFB.boot()
  File "/usr/lib/vmidentity/firstboot/vmidentity-firstboot.py", line 359, in boot
    self.checkSTS(self.__stsRetryCount, self.__stsRetryInterval)
  File "/usr/lib/vmidentity/firstboot/vmidentity-firstboot.py", line 1406, in checkSTS
    raise Exception('Failed to initialize Secure Token Server.')
Exception: Failed to initialize Secure Token Server.

Resolution:
This is an unrecoverable error, please retry install. If you run into this error again, please collect a support bundle and open a support request.

Issue Workaround

This issue happens when a VCSA or PSC installation was cancelled and is then attempted a second time on the same ESXi host.

The identified workaround for this issue is to use another ESXi host that has never had a PSC or VCSA deployment attempted on it.

Issue Resolution

VMware is aware of the bug and working on the resolution.

Beginner’s Guide to HPE 5000 Series Switches

October 14, 2017

I don’t closely track the popularity of my blog. If what I share helps people in their day to day job, it’s already good enough for me. But I do look at site statistics now and then just out of curiosity, and it seems that network-related posts get a lot of attention. A blog post I wrote a while ago on Dell N4000 switches has quickly gotten into the top five over the last year.

So it seems that there is a demand for entry-level switch configuration guides. I’ve worked with quite a few different switch brands over the years, so I thought I’d build on the success of the Dell blog post and this time write about the HPE FlexNetwork/FlexFabric 5000 switch series.

Operating Systems

HPE has several network switch product lines. I won’t even try to cover all of them in this post. But it’s important to know that there are a few different operating systems you can encounter while working with HPE network switches. There is the familiar ProCurve product portfolio (now merged with Aruba), which is based on the ProVision operating system.

The HPE FlexNetwork/FlexFabric 5000 series, on the other hand, is based on the Comware operating system. It has a different CLI command set and can be a complete surprise if you’ve only worked with ProCurve switches before. So this blog post will be particularly valuable for those who are dealing with HPE 5000 switches for the first time.

The following guide has been tested on a pair of HPE FlexFabric 5700-series switches. Even though commands are mostly the same on other switch series, like the FlexNetwork 5800, there might be some minor differences.

Initial Configuration

When the switch is booted for the first time it will start automatic configuration by trying to obtain settings over DHCP, which you can interrupt with Ctrl+C to get straight to the CLI.

You start in user view, where you can run display commands to review switch settings. To start the configuration, change to system view:

> system-view

Let’s start by configuring remote access to the switch. There are two ways you can do that. You either use the out-of-band management port:

> interface M-GigabitEthernet 0/0/0
> ip address 10.10.10.10 255.255.255.0
> ip route-static 0.0.0.0 0.0.0.0 10.10.10.1

Or you can configure a VLAN interface IP address:

> interface vlan-interface 1
> ip address 10.10.10.10 255.255.255.0
> ip route-static 0.0.0.0 0.0.0.0 10.10.10.1

Then configure switch name, enable SSH, set passwords and you can start managing the switch over SSH:

> sysname switchname

> public-key local create rsa
> ssh server enable
> user-interface vty 0 15
> authentication-mode scheme
> protocol inbound ssh

> super password simple yourpassword
> local-user admin
> password simple yourpassword
> authorization-attribute user-role level-0
> service-type ssh

User “admin” will have an unprivileged role. You will need to run the following command and enter the password once logged in to elevate to network admin rights:

> super

Intelligent Resilient Framework

In small non-business-critical environments one standalone switch is usually sufficient. In larger environments switches are typically deployed in pairs for redundancy. To simplify management and to avoid network loops most switches support some sort of MLAG or stacking. IRF is HPE’s version of it.

Determine what ports you’re going to use for IRF. There are two QSFP+ ports on the 5700 series dedicated for it. Then on the first switch (master) run the following commands (it’s recommended to shut down the ports before you set them up as IRF):

> irf member 1 priority 32
> int range FortyGigE 1/0/41 to FortyGigE 1/0/42
> shutdown
> irf-port 1/1
> port group interface FortyGigE 1/0/41
> irf-port 1/2
> port group interface FortyGigE 1/0/42
> int range FortyGigE 1/0/41 to FortyGigE 1/0/42
> undo shut
> save
> irf-port-configuration active

On the second switch (slave) run the following commands to change the IRF ID to 2:

> irf member 1 renumber 2
> reboot

When the switch comes up, configure IRF ports:

> irf member 2 priority 30
> int range FortyGigE 2/0/41 to FortyGigE 2/0/42
> shutdown
> irf-port 2/1
> port group interface FortyGigE 2/0/41
> irf-port 2/2
> port group interface FortyGigE 2/0/42
> int range FortyGigE 2/0/41 to FortyGigE 2/0/42
> undo shut
> save
> irf-port-configuration active

Now you can connect the physical IRF ports. IRF is a ring topology, which means (in my case) port 1/0/41 should connect to 2/0/42 and port 1/0/42 should connect to 2/0/41.

The second switch will automatically reboot, and if everything is configured correctly, you should see both switches join the IRF fabric. Member switch 1 has the higher priority of 32 and becomes the master:

> display irf

Firmware Upgrade

A firmware upgrade is the next logical step after you set up IRF. The latest firmware revision for the switches can be downloaded from the HPE web site. Keep in mind you will need an HPE Passport account with a valid service agreement (SAID) added to it.

You will also need a TFTP server to upgrade the firmware. There are a few of them out there, but the most commonly used is probably Tftpd64.

When you get the TFTP server up and running and copy the firmware file to it, perform an upgrade:

> tftp 10.10.10.20 get 5700-CMW710-R2432P03.ipe
> boot-loader file flash:/5700-CMW710-R2432P03.ipe slot 1 main
> boot-loader file flash:/5700-CMW710-R2432P03.ipe slot 2 main
> irf auto-update enable
> reboot

Confirm firmware has been updated:

> display version

VLANs, Aggregation Groups and Tagging

In Comware the term “aggregation group” is used to describe what is called a “port channel” in the Cisco world. Trunk/access ports are also called tagged/untagged ports throughout the documentation.

In this section we will discuss a few common port configuration scenarios:

  • Untagged ports, which can be your iSCSI storage array ports
  • Tagged ports, such as your VMware host uplinks
  • Aggregation groups, typically used for LAGs to upstream switches

First of all create all VLANs and give them descriptions:

> vlan 10
> description iSCSI
> vlan 20
> description Server
> vlan 30
> description Dev and test

Then specify untagged ports:

> vlan 10
> port te 1/0/1
> port te 2/0/1

To configure tagged ports and allow certain VLANs (ports will be added to the VLANs automatically):

> int te 1/0/2
> description ESX01 vmnic0
> port link-type trunk
> port trunk permit vlan 20 30
> int te 2/0/2
> description ESX02 vmnic0
> port link-type trunk
> port trunk permit vlan 20 30

And to create an LACP aggregation group:

> interface bridge-aggregation 1
> description Trunk to upstream switch
> link-aggregation mode dynamic
> port link-type trunk
> port trunk permit vlan 20 30

> interface te 1/0/3
> port link-aggregation group 1
> interface te 2/0/3
> port link-aggregation group 1
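Once the member ports are added, you can verify that the LAG has formed and the ports are in Selected state (command verified on Comware 7; syntax may vary slightly between releases):

> display link-aggregation verbose bridge-aggregation 1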

Common Commands

Other useful commands that don’t fall under any specific category, but are handy to know.

Display switch configuration:

> display current-configuration

Save switch configuration:

> save

Shut down a port:

> int te 1/0/27
> shutdown

Undo a command:

> undo shutdown

Conclusion

Whether you are a network engineer new to the Comware operating system or a VMware administrator looking for a quick cheat sheet for FlexNetwork/FlexFabric switches, I hope this guide has helped you get the job done.

If this blog post gets the same amount of popularity, maybe it will turn into another series. But for now – over and out.

Quick Way to Migrate VMs Between Standalone ESXi Hosts

September 26, 2017

Introduction

Since vSphere 5.1, VMware has offered an easy migration path for VMs running on hosts managed by vCenter. Using Enhanced vMotion, available in the Web Client, VMs can be migrated between hosts even if they don’t have shared datastores. In vSphere 6.0, cross vCenter vMotion (xVC-vMotion) was introduced, which no longer even requires the old and new hosts to be managed by the same vCenter.

But what if you don’t have a vCenter and you need to move VMs between standalone ESXi hosts? There are many tools that can do that. You can use V2V conversion in VMware Converter or the replication feature of the free version of Veeam Backup and Replication. But probably the easiest tool to use is OVF Tool.

Tool Overview

OVF Tool has been around since the Open Virtualization Format (OVF) was originally published in 2008. It’s constantly being updated, and the latest version, 4.2.0, supports vSphere up to version 6.5. The only downside of the tool is that it can only export VMs that are shut down. That may cause problems for big VMs that take a long time to export, but for small VMs the tool is priceless.

Installation

OVF Tool is a CLI tool that is distributed as an MSI installer and can be downloaded from VMware web site. One important thing to remember is that when you’re migrating VMs, OVF Tool is in the data path. So make sure you install the tool as close to the workload as possible, to guarantee the best throughput possible.

Usage Examples

After the tool is installed, open Windows command line and change into the tool installation directory. Below are three examples of the most common use cases: export, import and migration.

Exporting VM as an OVF image:

> ovftool "vi://username:password@source_host/vm_name" "vm_name.ovf"

Importing VM from an OVF image:

> ovftool -ds="destination_datastore" "vm_name.ovf" "vi://username:password@destination_host"

Migrating VM between ESXi hosts:

> ovftool -ds="destination_datastore" "vi://username:password@source_host/vm_name" "vi://username:password@destination_host"

When you are migrating, the machine the tool is running on is still used as a proxy between the two hosts; the only difference is that you are not saving the OVF image to disk and don’t need disk space available on the proxy.

This is what it looks like in the vSphere and HTML5 clients’ task lists.

Observations

When planning migrations using OVF Tool, throughput is an important consideration, because migration requires downtime.

OVF Tool is quite efficient in how it does export/import. Even for thick provisioned disks it reads only the consumed portion of the .vmdk. On top of that, the generated OVF package is compressed.

Due to compression, OVF Tool is typically bound by the speed of the ESXi host’s CPU. In my testing the export process consumed one out of two CPU cores (compression is single-threaded).

While testing on a 2-core Intel i5, I was getting a 25MB/s read rate from disk and an average export throughput of 15MB/s, which is roughly equal to a 1.6:1 compression ratio.

For a VM with a 100GB disk that has 20GB of space consumed, the export will take 20*1024/25 = 819 seconds, or about 14 minutes, which is not bad if you ask me. On a Xeon CPU I expect the throughput to be even higher.

Caveats

There are a few issues that you can potentially run into that are well-known, but I think are still worth mentioning here.

Special characters in URIs (strings starting with vi://) must be escaped: use % followed by the character’s HEX code. You can find character HEX codes here: http://www.techdictionary.com/ascii.html.

For example, use "vi://root:P%40ssword@10.0.1.10" instead of "vi://root:P@ssword@10.0.1.10", or you can get confusing errors similar to this:

Error: Could not lookup host: root

Disconnect ISO images from VMs before migrating them or you will get the following error:

Error: A general system error occurred: vim.fault.FileNotFound

Conclusion

OVF Tool requires downtime when exporting, importing or migrating VMs, which can be a deal-breaker for large-scale migrations. But when downtime is not a concern, or for VMs small enough for the outage to be minimal, OVF Tool will be my migration tool of choice from now on.

Extracting vRealize Operations Data Using REST API

September 17, 2017

Scripting today is an important skill if you’re part of an IT operations team. It is common to use PowerShell or another scripting language of your choice to automate repetitive tasks and be efficient in what you do. Another use case for scripting and automation, which is often missed, is the fact that they let you do more. Public APIs offered by many software and hardware solutions let you manipulate their data and call functions in the way you need, without being bound by the workflows provided in the GUI.

Recently I was asked to extract data from vRealize Operations Manager that was not available in GUI or a report in the format I needed. At first it looked like a non-trivial task as it required scripting and using REST APIs to pull the data. But after some research it turned out to be much easier than I thought.

Using Python, this can be done in a few lines of code using existing Python libraries that do most of the work for you. The goal of this blog post is to show that scripting does not have to be hard, and that by using the right tools for the right job you can get things done in a matter of minutes, not hours or days.

Scenario

To demonstrate an example of using vRealize Operations Manager REST APIs we will retrieve the list of vROps adapters, which vROps uses to pull information from many hardware and software solutions it supports, such as Nimble Storage or Microsoft SQL Server.

vROps APIs are obviously much more powerful than that, and you can use the same approach to pull other information, such as active and inactive alerts, performance statistics and recommendations. Full vROps API documentation can be found at https://your-vrops-hostname/suite-api/.

Install Python and Libraries

We will be using two Python libraries: “Requests” to make REST calls and “ElementTree” for XML parsing. ElementTree comes with Python, so we will only need to install the Requests package.

I already made a post here on how to install Python interpreter and Python libraries, so we will dive right into vROps APIs.

Retrieve the List of vROps Adapters

To get the list of all installed vROps adapters we need to make a GET REST call using the “get” method from Requests library:

import requests
from requests.auth import HTTPBasicAuth

akUrl = 'https://vrops/suite-api/api/adapterkinds'
ak = requests.get(akUrl, auth=HTTPBasicAuth('user', 'pass'))

In this code snippet, using the “import” command we specify that we are using the Requests library, as well as its implementation of basic HTTP authentication. Then we request the list of vROps adapters using the “get” method from the Requests library and save the XML response into the “ak” variable. Add “verify=False” to the list of the get call parameters if you struggle with SSL certificate issues.

As a result you will get the full list of vROps adapters as an XML response. So how do we navigate it? Using the ElementTree XML library.

Parsing XML Response Sequentially

vRealize Operations Manager returns REST API responses in XML format. ElementTree lets you parse these XML responses to find the data you need, which you can output in a human-readable format, such as CSV and then import into an Excel spreadsheet.

Parsing XML tree requires traversing from top to bottom. You start from the root element:

import xml.etree.ElementTree as ET

akRoot = ET.fromstring(ak.content)

Then you can continue by iterating through child elements using nested loops:

for adapter in akRoot:
  print(adapter.tag, adapter.attrib['key'])
  for adapterProperty in adapter:
    print(adapterProperty.tag, adapterProperty.text)

Children of <ops:adapter-kinds> are <ops:adapter-kind> elements. Children of <ops:adapter-kind> elements are <ops:name>, <ops:adapterKindType>, <ops:describeVersion> and <ops:resourceKinds>. So the output of the above code will be:

adapter-kind CITRIXNETSCALER_ADAPTER
name Citrix NetScaler Adapter
adapterKindType GENERAL
describeVersion 1
resourceKinds citrix_netscaler_adapter_instance
resourceKinds appliance
…

As you may have already noticed, all XML elements have tags and can additionally have attributes and associated text. From the above example:

  • Tags: adapter-kind, name, adapterKindType
  • Attribute: key
  • Text: Citrix NetScaler Adapter, GENERAL, 1

Finding Interesting Elements

Typically you are looking for specific information and don’t need to traverse the whole XML tree. So instead of walking through the tree sequentially, you can iterate through interesting elements using the “iterfind” method. For instance, if we are looking only for adapter names, the code would look as follows:

ns = {'vrops': 'http://webservice.vmware.com/vRealizeOpsMgr/1.0/'}
for akItem in akRoot.iterfind('vrops:adapter-kind', ns):
  akNameItem = akItem.find('vrops:name', ns)
  print(akNameItem.text)

All elements in REST API responses are usually prefixed with a namespace. To avoid typing the long namespace-qualified element names, such as {http://webservice.vmware.com/vRealizeOpsMgr/1.0/}adapter-kind, ElementTree methods accept a namespace mapping, which can be passed as a variable, like the “ns” variable in this code snippet.

The resulting output will be similar to:

Citrix NetScaler Adapter
Container
Dell EMC PowerEdge
Dell Storage Adapter
EP Ops Adapter
F5 BIG-IP Adapter
HP Servers Adapter

Additional Information

I intentionally tried to keep this post short, to give you all the information required to start using Python to parse REST API responses in XML format.

I have written two scripts that are more practical and shared them on my GitHub page here:

  • vrops_object_types_1.0.py – extracts adapters, object types and number of objects. Script gives you an idea of what is actually being monitored in vROps, by providing the number of objects you have in your vROps instance for each adapter and object type.
  • vrops_alert_definitions_1.0.py – extracts adapters, object types, alert names, criticality and impact. As opposed to the first script, this script provides the list of alerts for each adapter and object type, which is helpful to identify potential alerts that can be triggered in vROps.

Feel free to download these scripts from GitHub and play with them or adapt them according to your needs.


Python for Windows: Quick Installation

September 7, 2017

Only recently I added Python to the list of tools I use in my job. I had always used PowerShell if I needed to script something, until I saw how easy Python is to use. I will be keeping it in my arsenal from now on.

In the near future I plan to make a blog post on how to use Python with REST APIs, and in this post I wanted to provide quick instructions on how to install Python on Windows that I can later use as a reference.

Installing Python

The latest version of Python for Windows can be downloaded from https://www.python.org/downloads/windows/. The executable installer is probably the easiest option. When installing, make sure to check the “Add Python 3.6 to PATH” option – it makes life much easier.

Installing Libraries

The Python installation already includes lots of libraries that you can use for scripting. The ElementTree library, for example, which is used for XML parsing, comes with the interpreter.

Depending on what you want to use Python for, you may need to install additional libraries. For instance, if you want to call REST APIs, you will need Requests – a library for HTTP calls.

Python uses a package manager called “pip”. If it is not already in your PATH variable, find pip under the Python installation directory and run it from the command line as administrator:

pip install requests

Once the library is installed you can use it by importing it into your scripts:

import requests
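To quickly confirm the library is installed and importable, you can run a one-liner like this from the command prompt:

python -c "import requests; print(requests.__version__)"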

Writing Code

At this point you can call the Python interpreter from the Windows command line and start running Python commands. If you want to write a script, however, you will need an IDE. There is nothing wrong with using Notepad, but there are more efficient ways to do that.

Python for Windows comes with an IDE, simply called IDLE. It is very basic, but it provides all the essential features, such as code completion, syntax highlighting and a primitive debugger. It is not perfect, but it has everything to get you started.

Conclusion

That is a quick crash course with three simple steps to get Python up and running. I tried to keep it short to demonstrate how you can start using Python with minimum effort.

AWS Cloud Protection Manager Part 3: Backup and Restore

August 21, 2017

Backup

Backups are created according to the schedule specified in the backup policy. We discussed how to configure backup policies in the previous blog post of the series. The list of backups you see on the Backup Monitor tab are your restore points. Backups that are older than the specified retention policy will be purged from the list and you will not see them there, unless you move them to the “Freezer”.

It is important to understand that, apart from volume snapshots, CPM also creates an AMI for each backed up instance. Those who have hands-on experience with AWS may already know that AMIs are the only way to create clones of Windows EC2 instances in AWS. If you go to the AWS console and try to find a clone action under the instance Action menu, you won’t find any. You will have “Create Image” instead. It creates an AMI, from which you can then spin up a clone of the instance the image was created from.

CPM does exactly that. For each backup policy the instance is under, it creates one AMI. In our example we have four backup policies, which will result in four AMIs for each of the instances. Every AMI has to have at least one storage volume, so CPM will include the root volume of each instance in the AMI, just because it has to. But AMIs are only required to restore the EC2 instance configuration. Data is restored from volume snapshots, which can be used to create new volumes and attach them to the instance. You can click on the View button under Snapshots to find the corresponding snapshot and AMI IDs.

There is also a backup log for each job run, which is helpful for troubleshooting.

Restore

To perform a restore, click on the Recover button next to the backup job and you will get the list of the instances you can recover. CPM offers you three options: instance recovery, volume recovery and file recovery. Let’s go through them back to front.

File recovery is probably the most used recovery option, as it lets you restore individual files. When you click on the “Explore” button, CPM creates new volumes from the snapshots you are restoring from and mounts them to the CPM instance. You are then presented with a simple file system browser, where you can find the file and click on the green down arrow icon in the Download column to save the file to your computer.

If you click on “Volume Only”, you can restore particular volumes. Restored volumes are not attached to any instance, unless you specify one under the “Attach to Instance” column. Under “Attach Behaviour” you can then select what CPM should do if such a volume is already attached to the instance, or whether to automatically detach the original volume (the latter can be done only if the instance is stopped).

And the last option is “Instance”. It will create a clone of the original instance using the pre-generated AMI and volume snapshots, as we discussed in the Backup section of this blog post. You can specify many options under the Advanced Options section, including recovery to another VPC or a different availability zone. If anything, make sure you specify a new IP address for the instance, otherwise you’ll have a conflict and your restore will fail. Ideally you should also shut down the original EC2 instance before spinning up a restore clone.

Advanced Features

There are quite a few features worth mentioning. So far we have looked at a simple EC2 instance restore, but you don’t have to back up whole instances; you can also back up individual volumes. On top of that, CPM supports RDS database, Aurora and Redshift cluster backups.

If you run MS Exchange, SharePoint or SQL on your EC2 instances, you can install the CPM backup agent on them to ensure you have application-consistent backups via VSS, as opposed to the crash-consistent backups you get if the agent is not used. If you install the agent, you can also run a script on the instance before and after the backup is taken.

Last but not least is DR. Restoring to another availability zone within the region is already supported at the instance recovery level: you can choose the availability zone you want to restore to. It is not possible to recover to another region, though, because AWS snapshots and AMIs are local to the region they are created in. If you want to be able to recover to another region, you can configure DR in CPM, which will utilise the AWS AMI and snapshot copy functionality to copy backups to another region at a configured frequency.

Conclusion

Overall, I found Cloud Protection Manager very easy to install, configure and use. If you come from an infrastructure background, at first glance CPM may look like a very basic tool compared to feature-rich solutions like Veeam or Commvault. But that impression is misleading. CPM is simple because AWS is simple: all infrastructure complexity is hidden under the covers. As a result, all an AWS backup tool needs to do is create snapshots, and CPM does it well.