Posts Tagged ‘export’

vSphere Host Profiles: Using Customization Files

February 15, 2020

Overview

If you own vSphere Enterprise Plus licences, using vSphere Host Profiles is a no-brainer. Even if you rarely add ESXi hosts to your cluster, why configure them by hand when you can do it with a few mouse clicks in a fast and consistent manner?

Host profiles are usually created by setting up one ESXi host according to your requirements and then capturing its state. Some settings in a host profile are unique to each host, such as the host name, VMkernel adapter network settings, the user name for joining the host to AD, etc. When you apply your profile to a new, unprepared host, vCenter will ask you to specify these settings. This step is called host customization.

You can either type these settings in manually or, if you want to take your automation game one step further, use a customization file, which is simply a list of settings in .csv format.

This feature was first introduced in vSphere 6.5, and the official documentation is a bit light on the topic. The purpose of this post is to close that gap by demonstrating where to find this configuration option.

Create

To create a customization file, right-click an ESXi host and choose Host Profiles > Export Host Customizations. The host has to have a host profile already applied to it (including all customization settings), otherwise this option will be greyed out. This can be the first host you used to capture the original host profile.

Open the .csv file in your editor of choice and change the settings accordingly. If you are adding multiple hosts to your cluster, you can write a script to generate a copy of this file for each new ESXi host, as in the sketch below.
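
Here is a minimal Python sketch of that idea: it treats the exported file as a template and stamps out one copy per new host by replacing the reference host’s values. The host list and the REF_HOSTNAME/REF_IP placeholders are hypothetical; substitute the actual values and file names from the .csv you exported.

# A rough sketch: generate one customization file per new host from a
# template in which the reference host's values have been replaced with
# the placeholders REF_HOSTNAME and REF_IP.
hosts = {
    'esxi01.lab.local': '10.0.1.11',
    'esxi02.lab.local': '10.0.1.12',
}

with open('template.csv') as f:
    template = f.read()

for hostname, ip in hosts.items():
    customized = template.replace('REF_HOSTNAME', hostname).replace('REF_IP', ip)
    with open(hostname + '.csv', 'w') as out:
        out.write(customized)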

Apply

Host customization settings are specified (manually or using a customization file) when a host profile is being applied to the host. So first right-click the host and choose Host Profiles > Attach Host Profile. Then, on the Customize hosts page, import the customization file by clicking the Browse button:

Note: If you hit the “Host settings validation failed” error after applying host customizations, read my blog article here that explains the problem.

Conclusion

Pretty simple, isn’t it? The key is to remember that the customization file can be specified when you are applying the host profile or, alternatively, you can skip the host customization step and use Host Profiles > Edit Host Customizations later. For a host that doesn’t have a host profile associated with it, the Edit Host Customizations option will always be greyed out.


Connecting to PostgreSQL Database Backing VMware Products

August 19, 2019

Most VMware products these days are standardised on PostgreSQL. Yes, you can still deploy vCenter for Windows, for instance, and use MS SQL or Oracle as the back-end database, but that option is now deprecated and vSphere 6.7 is the last release where it’s supported. Other products, like vRealize Automation, are moving in the same direction.

VCSA, vRA and vRO are all distributed as appliances and shouldn’t be modified in any way by the end user. But I’ve had times when I needed to connect directly to the PostgreSQL database to better understand certain parts of a product. One recent example was encryption in vRO: I needed to make sure that the passwords I save in SecureString attributes (the ones shown as asterisks) in my workflows are not kept as plain text in vRO. So let’s see how I validated this assumption by looking at the vRO database.

vRO Database

I first SSH’ed into the appliance and connected to the database using PostgreSQL interactive terminal:

# psql vmware postgres

I then listed all database table names:

> SELECT * FROM pg_catalog.pg_tables;

When I found the table I was looking for, I listed its contents:

> SELECT * FROM vmo_workflowcontent;

Then I simply searched for my attribute name in the output and confirmed that its value was indeed encrypted.

Exporting the Database

You won’t always know which table you’re looking for, so the easiest way to go about it is to export the whole database as plain text and search through the resulting file:

# su -m -c "/opt/vmware/vpostgres/current/bin/pg_dump -Fp vmware > /tmp/vmware.sql" postgres

“-Fp” here stands for plain text (the default is the custom format, which is compressed), “vmware” is the database name and “postgres” is the user.

VCSA and vRA Databases

You will find that database names differ between products: for instance, vCenter’s database name is “VCDB” (capital letters) and vRA’s is “vcac” (the username is also “vcac”). So if you need to connect to the VCSA database, you will use the following syntax:

# psql VCDB postgres

For vRA it will look like this:

# psql vcac vcac

Then you can use the same approach demonstrated for vRO to read table data or simply export the whole database.

Conclusion

I hope this helps you with your tinkering adventures. Just make sure to use this approach only for research and don’t change anything in the database unless specifically advised by GSS.

Quick Way to Migrate VMs Between Standalone ESXi Hosts

September 26, 2017

Introduction

Since vSphere 5.1, VMware has offered an easy migration path for VMs running on hosts managed by vCenter. Using Enhanced vMotion, available in the Web Client, VMs can be migrated between hosts even if they don’t have shared datastores. vSphere 6.0 introduced Cross vCenter vMotion (xVC-vMotion), which no longer even requires the old and new hosts to be managed by the same vCenter.

But what if you don’t have a vCenter and you need to move VMs between standalone ESXi hosts? There are many tools that can do that. You can use V2V conversion in VMware Converter or the replication feature of the free version of Veeam Backup & Replication. But probably the easiest tool to use is OVF Tool.

Tool Overview

OVF Tool has been around since the Open Virtualization Format (OVF) was originally published in 2008. It’s constantly being updated, and the latest version, 4.2.0, supports vSphere up to version 6.5. The only downside of the tool is that it can only export VMs that are shut down. This may cause problems for big VMs that take a long time to export, but for small VMs the tool is priceless.

Installation

OVF Tool is a CLI tool that is distributed as an MSI installer and can be downloaded from the VMware web site. One important thing to remember is that when you’re migrating VMs, OVF Tool is in the data path, so make sure you install it as close to the workload as possible to get the best possible throughput.

Usage Examples

After the tool is installed, open the Windows command line and change into the tool’s installation directory. Below are three examples of the most common use cases: export, import and migration.

Exporting VM as an OVF image:

> ovftool "vi://username:password@source_host/vm_name" "vm_name.ovf"

Importing VM from an OVF image:

> ovftool -ds="destination_datastore" "vm_name.ovf" "vi://username:password@destination_host"

Migrating VM between ESXi hosts:

> ovftool -ds="destination_datastore" "vi://username:password@source_host/vm_name" "vi://username:password@destination_host"

When you are migrating, the machine the tool is running on is still used as a proxy between the two hosts; the only difference is that you are not saving the OVF image to disk and don’t need disk space available on the proxy.

This is what it looks like in vSphere and HTML5 clients’ task lists:

Observations

When planning migrations using OVF Tool, throughput is an important consideration, because migration requires downtime.

OVF Tool is quite efficient in how it does export/import. Even for thick-provisioned disks it reads only the consumed portion of the .vmdk. On top of that, the generated OVF package is compressed.

Due to compression, OVF Tool is typically bound by the speed of the ESXi host’s CPU. In the screenshot below you can see how the export process takes one of the two CPU cores (compression is single-threaded).

While testing on a two-core Intel i5, I was getting a 25MB/s read rate from disk and an average export throughput of 15MB/s, which roughly equals a 1.6:1 compression ratio.

For a VM with a 100GB disk that has 20GB of space consumed, the export will take 20*1024/25 = 819 seconds, or about 14 minutes, which is not bad if you ask me. On a Xeon CPU I would expect throughput to be even higher.

Caveats

There are a few well-known issues that you can potentially run into, but I think they are still worth mentioning here.

Special characters in URIs (the strings starting with vi://) must be escaped. Use % followed by the character’s hex code. You can find character hex codes here: http://www.techdictionary.com/ascii.html.

For example, use “vi://root:P%40ssword@10.0.1.10” instead of “vi://root:P@ssword@10.0.1.10”, or you can get confusing errors similar to this:

Error: Could not lookup host: root
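
If you don’t want to look up hex codes by hand, a couple of lines of Python will compute the escaped string for you (just a convenience sketch, not part of OVF Tool itself):

# Percent-encode a password for use inside a vi:// URI.
# Python 3 shown; on Python 2 the same function is urllib.quote.
from urllib.parse import quote

print(quote('P@ssword', safe=''))   # prints P%40ssword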

Disconnect ISO images from VMs before migrating them or you will get the following error:

Error: A general system error occurred: vim.fault.FileNotFound

Conclusion

OVF Tool requires downtime when exporting, importing or migrating VMs, which can be a deal-breaker for large-scale migrations. But when downtime is not a concern, or for VMs that are small enough for the outage to be minimal, OVF Tool will from now on be my migration tool of choice.

Extracting vRealize Operations Data Using REST API

September 17, 2017

Scripting is an important skill today if you’re part of an IT operations team. It is common to use PowerShell, or any other scripting language of your choice, to automate repetitive tasks and be efficient in what you do. Another use case for scripting and automation, which is often missed, is that they let you do more. Public APIs offered by many software and hardware solutions let you manipulate their data and call their functions in the way you need, without being bound by the workflows provided in the GUI.

Recently I was asked to extract data from vRealize Operations Manager that was not available in the GUI or in a report in the format I needed. At first it looked like a non-trivial task, as it required scripting and using REST APIs to pull the data. But after some research it turned out to be much easier than I thought.

With Python, this can be done in a few lines of code using existing Python libraries that do most of the work for you. The goal of this blog post is to show that scripting does not have to be hard and that, using the right tools for the job, you can get things done in a matter of minutes, not hours or days.

Scenario

To demonstrate the vRealize Operations Manager REST API, we will retrieve the list of vROps adapters, which vROps uses to pull information from the many hardware and software solutions it supports, such as Nimble Storage or Microsoft SQL Server.

The vROps API is obviously much more powerful than that, and you can use the same approach to pull other information such as active and inactive alerts, performance statistics and recommendations. The full vROps API documentation can be found at https://your-vrops-hostname/suite-api/.

Install Python and Libraries

We will be using two Python libraries: “Requests” to make REST calls and “ElementTree” for XML parsing. ElementTree comes with Python, so we only need to install the Requests package.

I already made a post here on how to install Python interpreter and Python libraries, so we will dive right into vROps APIs.

Retrieve the List of vROps Adapters

To get the list of all installed vROps adapters, we need to make a GET REST call using the “get” method from the Requests library:

import requests
from requests.auth import HTTPBasicAuth

akUrl = 'https://vrops/suite-api/api/adapterkinds'
ak = requests.get(akUrl, auth=HTTPBasicAuth('user', 'pass'))

In this code snippet, the “import” statements specify that we are using the Requests library, as well as its implementation of basic HTTP authentication. We then request the list of vROps adapters using the “get” method and save the XML response into the “ak” variable. Add “verify=False” to the get call’s parameters if you struggle with SSL certificate issues.

As a result, you will get the full list of vROps adapters as one long XML response. So how do we navigate that? Using the ElementTree XML library.

Parsing XML Response Sequentially

vRealize Operations Manager returns REST API responses in XML format. ElementTree lets you parse these XML responses to find the data you need, which you can then output in a human-readable format such as CSV and import into an Excel spreadsheet.

Parsing an XML tree requires traversing it from top to bottom. You start from the root element:

import xml.etree.ElementTree as ET

akRoot = ET.fromstring(ak.content)

Then you can continue by iterating through child elements using nested loops:

for adapter in akRoot:
    print adapter.tag, adapter.attrib['key']
    for adapterProperty in adapter:
        print adapterProperty.tag, adapterProperty.text

The children of <ops:adapter-kinds> are <ops:adapter-kind> elements. The children of each <ops:adapter-kind> element are <ops:name>, <ops:adapterKindType>, <ops:describeVersion> and <ops:resourceKinds>. So the output of the above code will look like this (namespace prefixes are trimmed for readability):

adapter-kind CITRIXNETSCALER_ADAPTER
name Citrix NetScaler Adapter
adapterKindType GENERAL
describeVersion 1
resourceKinds citrix_netscaler_adapter_instance
resourceKinds appliance
…

As you may have already noticed, all XML elements have tags and can additionally have attributes and associated text. From the above example:

  • Tags: adapter-kind, name, adapterKindType
  • Attribute: key
  • Text: Citrix NetScaler Adapter, GENERAL, 1

Finding Interesting Elements

Typically you are looking for specific information and don’t need to traverse the whole XML tree. So instead of walking through the tree sequentially, you can iterate through the interesting elements using the “iterfind” method. For instance, if we are looking only for adapter names, the code would look like the following:

ns = {'vrops': 'http://webservice.vmware.com/vRealizeOpsMgr/1.0/'}
for akItem in akRoot.iterfind('vrops:adapter-kind', ns):
    akNameItem = akItem.find('vrops:name', ns)
    print akNameItem.text

All elements in the REST API responses are prefixed with a namespace. To avoid spelling out the long, fully qualified element names, such as http://webservice.vmware.com/vRealizeOpsMgr/1.0/adapter-kind, ElementTree methods accept a namespace map that can be passed in as a variable, which is what the “ns” variable in this code snippet is for.

The resulting output will be similar to:

Citrix NetScaler Adapter
Container
Dell EMC PowerEdge
Dell Storage Adapter
EP Ops Adapter
F5 BIG-IP Adapter
HP Servers Adapter
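
Earlier I mentioned importing the results into an Excel spreadsheet. Reusing the “akRoot” and “ns” variables from above, the same loop can write the adapter list to a CSV file instead of printing it. A minimal sketch (the output file name is arbitrary):

import csv

# Write adapter kinds and names to a CSV file that Excel can open.
# The file is opened in binary mode for Python 2's csv module; on Python 3
# use open('adapters.csv', 'w', newline='') instead.
with open('adapters.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow(['Adapter Kind', 'Adapter Name'])
    for akItem in akRoot.iterfind('vrops:adapter-kind', ns):
        akNameItem = akItem.find('vrops:name', ns)
        writer.writerow([akItem.attrib['key'], akNameItem.text])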

Additional Information

I intentionally kept this post short, to give you just the information you need to start using Python to parse REST API responses in XML format.

I have written two scripts that are more practical and shared them on my GitHub page here:

  • vrops_object_types_1.0.py – extracts adapters, object types and the number of objects. The script gives you an idea of what is actually being monitored in vROps by providing the number of objects you have in your vROps instance for each adapter and object type.
  • vrops_alert_definitions_1.0.py – extracts adapters, object types, alert names, criticality and impact. As opposed to the first script, this one provides the list of alerts for each adapter and object type, which is helpful for identifying the potential alerts that can be triggered in vROps.

Feel free to download these scripts from GitHub and play with them or adapt them according to your needs.


Dell Repository Manager: Bootable ISO Issues

May 23, 2016

In one of my previous posts I described the process of upgrading Dell FX2 chassis firmware using Dell Repository Manager (DRM).

In an ideal world you just follow the process and in an hour or two your chassis is upgraded. But sometimes you may run into issues. I want to go through some of them in this post, including possible remediation steps.

Issue Description

When exporting firmware to a bootable ISO, you may find that DRM is unable to download some of the bundle components, with the following error in the Job Result:

Processing failed:
Failed downloading files:
Diagnostics_Application_PWMC8_LN64_OSC_1.1_A00.BIN

And errors in the Log:


60. 24/03/2016 5:58:50 PM Export to Bootable ISO : Downloaded 34 / 56
61. 24/03/2016 5:59:44 PM Export to Bootable ISO : Error downloading some files
62. 24/03/2016 5:59:45 PM Export to Bootable ISO : Failed exporting to Bootable ISO.

Workaround #1: Skip the Component

You can try ticking the “Continue download irrespective of any error (in the selected components)” option in the export dialog. It won’t get the component downloaded, but you will at least get a bootable ISO.

However, DRM will still keep the failed component in the bundle and try to install it during the upgrade, which will obviously fail (update 16/56):

[screenshot: failed_update]

Once the upgrade has finished, you will get the following message at the end:

Note: Some update requires machine reboot. Please reboot to CD/DVD to continue for the failed update because of the dependency…

[screenshot: upgrade_status]

No matter how many times you reboot, you will obviously get the same errors. You can ignore them if you are 100% sure the skipped component is what causes the upgrade to fail, or use Workaround #2.

Workaround #2: Create Custom ISO

When you create a repository in DRM, it’s populated with pre-built components and bundles. But you can also create custom repositories. The idea is to exclude the failed component from the repository by building it manually.

Assuming you already have the base repository configured, do the following:

  • Open the existing repository and click on the Components tab
  • Deselect the failed component in the component list (in my case it was Diagnostics_Application_PWMC8_LN64_OSC_1.1_A00.BIN)
  • Click on the “Copy To” button:

[screenshot: custom_components]

  • In the dialogue that opens, select “Create NEW Repository and copy component(s) into it”
  • Follow the wizard, and when you click Finish, the components will be copied to the newly created repository
  • Open the new repository and click on the Components tab
  • Select all components and click on the “Copy To” button once again
  • This time select “Create a NEW Bundle in the same repository and add component(s) into it”
  • On the next screen, give the bundle a name and make sure to choose “Linux 32-bit and 64-bit” as the OS Type

[screenshot: custom_bundle]

As a result you should get a new bundle, which you can export to a bootable ISO using the same process.

Workaround #3: Use Server Update Utility

If none of the above helps, you can fall back to a proven upgrade approach and use the Server Update Utility (SUU). SUU is a huge 12GB ISO to download, but you can use Dell Download Manager, which supports resuming interrupted downloads. Make sure to disable your proxy, though! Dell Download Manager does not support resuming an interrupted download if you’re using a proxy server.

SUU is not a bootable ISO. Previously you had to boot from the Dell Systems Build and Update Utility (SBUU) first and then mount the ISO to proceed with the upgrade. Starting with Dell 11G servers you don’t need it anymore and can upgrade firmware straight from the Dell Lifecycle Controller (LC).

You’ll need to boot into the Lifecycle Controller and choose Firmware Update > Launch Firmware Update > Local Drive (CD or DVD or USB). Mount the SUU ISO and the rest is fairly straightforward. LC will upgrade the firmware and reboot the blade.

[screenshot: lc_upgrade]

Conclusion

Dell Repository Manager is the recommended approach to upgrade firmware on Dell hardware. Unlike SUU, DRM downloads the latest updates and only the necessary components. It is also capable of making a bootable ISO.

If you run into issues, fall back on the Server Update Utility, as it’s bulletproof and always works. But be prepared to download a 12GB ISO image, and make sure you have an option to bypass the proxy.

Exporting Performance Data from NetApp DataFabric Manager

May 30, 2013

A quick post on how to export custom data from DataFabric Manager Performance Advisor.

NetApp Management Console gives convenient access to Performance Advisor data and graphs for a comprehensive analysis of NetApp performance. But NMC only shows graphs and doesn’t give access to the exact numbers. There is, however, a way to export them for further analysis from the dfm CLI:

> dfm perf data retrieve -o filer_name -C disk:disk_busy -b "2013-05-23 12:00:00" -e "2013-05-23 17:00:00" -s 3600 -x TimeIndexed > C:\Temp\dfm_export.txt

The default sample rate for the performance data is 15 minutes, which means you will get 20 lines of data for a 5-hour period. You can specify the sample rate in seconds using the ‘-s’ key. A particular performance counter is specified with the ‘-C’ key. To list all the available counters, run:

> dfm perf export counter list

Data is exported in a list format; if you want it to look more like a spreadsheet, use ‘-x TimeIndexed’. And that’s all for now.

Export share in ROCKS

March 14, 2012

In my previous post I described how to present an iSCSI LUN to a Linux host. I moved all home directories to this NAS share, but later came to the conclusion that a separate share would be better. Users should be able to quickly compile applications in their home directories. If home directories are also used as target storage for computational data, then during computation the iSCSI network link can become a bottleneck and slow everything down. That’s why I decided to separate them. This requires exporting an additional share, which can be done very easily in ROCKS.

1. Mount the LUN to, say, /export/scratch

2. Create the export by adding the following (all on one line) to /etc/exports

/export/scratch 192.168.111.128(rw,async,no_root_squash) 192.168.111.0/255.255.255.0(rw,async)

3. Restart nfs

/etc/rc.d/init.d/nfs restart

4. Add line to /etc/auto.share

scratch master.local:/export/&

5. Update 411 config

make -C /var/411

Now the share is accessible to all compute nodes at /share/scratch.

The same process is described in the ROCKS FAQ here.

Present NetApp iSCSI LUN to Linux host

March 7, 2012

Consider the following scenario (which is in fact a real case). You have a High Performance Computing (HPC) cluster where users generate a lot of research data. The local hard drives on the frontend node are almost always insufficient. There are two options. The first is presenting an NFS share to both the frontend and all compute nodes. Since compute nodes usually connect only to the private network for communication with the frontend and don’t have public IP addresses, this means a lot of reconfiguration, not to mention possible security implications.

The simpler solution here is to use iSCSI. Unlike NFS, which requires direct communication, with iSCSI you can mount the LUN on the frontend, and the compute nodes will then work with it as an ordinary NFS share over the private network. This involves configuring an iSCSI LUN on the NetApp filer and bringing up an iSCSI initiator in Linux.

iSCSI configuration consists of several steps. First of all, you need to create a FlexVol volume where your LUN will reside, and then create a LUN inside it. The second step is creating an initiator group, which enables connectivity between the NetApp and a particular host. And as a last step, you need to map the LUN to the initiator group, which lets the Linux host see the LUN. In case you had disabled iSCSI, don’t forget to enable it on the required interface.

vol create scratch aggrname 1024g
lun create -s 1024g -t linux /vol/scratch/lun0
igroup create -i -t linux hpc
igroup add hpc linux_host_iqn
lun map /vol/scratch/lun0 hpc
iscsi interface enable if_name

Linux host configuration is simple. Install the iscsi-initiator-utils package and add the service to init so it starts on boot. The iSCSI IQN that the OS uses for connecting to iSCSI targets is read from /etc/iscsi/initiatorname.iscsi upon startup. Once the iSCSI initiator is up and running, you need to initiate the discovery process, and if everything goes fine you will see a new hard drive in the system (I had to reboot). Then you just create a partition, make a file system and mount it.

iscsiadm -m discovery -t sendtargets -p nas_ip
fdisk /dev/sdc
mke2fs -j /dev/sdc1
mount /dev/sdc1 /state/partition1/home

I use it for the home directories in the ROCKS cluster suite. ROCKS automatically exports /home through NFS to the compute nodes, which in turn mount it via autofs. If you intend to use this volume for other purposes, you will need to configure your own custom NFS export.