Posts Tagged ‘API’

Using VMware SDK Support

December 5, 2019

I predict this post will get a single hit over its lifetime, but if it helps at least one person desperately trying to find out how to open a VMware SDK support request, that’s good enough for me.

Quick Overview

Not everyone knows this, but along with support for vSphere, NSX and all of its other software products, VMware also provides SDK and API support. If you are a partner developing a solution that integrates with a VMware product, or even a customer writing your own vSphere plug-in using the vSphere Management SDK, you can reach out to VMware for help.

It's a paid service. You can find a detailed description of it on its landing page here: VMware SDK and API Support

How to open SRs

One thing that is not very obvious about SDK support is how to open support requests if you're a customer. The goal of this short post is to show where to find this on the VMware support portal:

  1. Log in to My VMware portal using your account credentials
  2. Under the Support section click Get Support
  3. On the page that opens, under the "Technical" category, choose your issue type, such as "Fault/Crash"
  4. In the provided list of Supported Products expand SDK Support Services
  5. Select VMware SDK Support
  6. Click Continue and proceed with describing your issue and opening the ticket, as usual

This is a screenshot of what it will look like if your account has been entitled to SDK support:

If you're working with SpringSource, there is also a range of support options under the SpringSource Open Source Support sub-category.

Conclusion

I've had only brief interactions with the SDK Support team, but I can only say good things about them. In one case I had a question about the parameter specification of a particular vSphere Web Management SDK function; not only did I get an answer, I was also provided with code snippets, which I hadn't even asked for. So if you are serious about using VMware SDKs and think you may require technical support, I can certainly recommend this service.

Troubleshooting vSphere Guest Operations API

October 4, 2019

What is vSphere Guest Operations

Recently I've been heavily utilizing the vSphere Guest Operations API for automating vCenter patching. vSphere Guest Operations (GuestOps) is an API that allows you to run commands inside a virtual machine without connecting to it over the network. All you need are credentials for the vCenter managing the virtual machine and for the virtual machine itself.

GuestOps can be called using the Invoke-VMScript PowerCLI cmdlet in the following format:

> Invoke-VMScript -ScriptText "uname -a" -vm vc01 -GuestUser root -GuestPassword VMware1!

The cmdlet talks to vCenter, vCenter talks to the ESXi host, the ESXi host talks to VMware Tools and, eventually, VMware Tools runs the command on the guest OS.

It worked well for me when I was running commands on a VCSA 6.0 VM (managed by another vCenter), but after patching and upgrading this VM to VCSA 6.7 I encountered the following error:

Error occured while executing script on guest OS in VM 'vc01'. Could not locate "Powershell" script interpreter in any of the expected locations. Probably you do not have enough permissions to execute command within guest.

It's obvious from the error message that the cmdlet is doing something wrong, since it's supposed to use Bash on Linux, not PowerShell.

Enable Debugging in VMware Tools

To better understand what was going on, I logged in to the VCSA via SSH, enabled VMware Tools debugging (see KB1007873 for instructions) and restarted Open VM Tools:

# systemctl restart vmtoolsd.service

After running the Invoke-VMScript cmdlet again, this is what I noticed in the vmsvc.log debug log:

[vix] VixTools_StartProgram: User: root args: progamPath: 'cmd.exe', arguments: '/C powershell -NonInteractive -EncodedCommand cABvAHcAZQByAHMAaABl…

So it wasn't just a misleading PowerCLI error message: Invoke-VMScript was actually trying to call a PowerShell command using the Windows command interpreter on a Linux VM.

Solution

My guess is that since VMware changed the underlying operating system of the VCSA from SUSE Linux to Photon OS, Invoke-VMScript can no longer properly identify the guest OS and defaults to Windows.

The simple solution is to give the Invoke-VMScript cmdlet a helping hand and specify the interpreter explicitly using the -ScriptType Bash parameter. This is what the resulting debug log message looks like when everything works properly:

[vix] VixToolsStartProgramImpl: started '"/bin/bash" -c "bash > /tmp/vmware-root/powerclivmware159 2>&1 -c \"uname -a\""', pid 7456
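
For reference, the corrected call is simply the original command with the interpreter specified explicitly (a minimal sketch reusing the same example VM and credentials as earlier):

> Invoke-VMScript -ScriptText "uname -a" -vm vc01 -GuestUser root -GuestPassword VMware1! -ScriptType Bash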

Run CLI Commands on NSX Manager Using REST API

August 29, 2019

Over the last few years I've had a chance to work with the NSX-V REST API in many different shapes and forms: directly from vRealize Orchestrator and PowerShell/PowerNSX, indirectly from vRealize Automation, or simply by making calls from Postman, which is sometimes required during NSX deployments and upgrades.

To date I haven't been able to find any gaps in the API and can say only good things about it. It is very well documented. You can find detailed descriptions of all requests in the NSX API Guide PDF or browse it interactively in the API Explorer on https://code.vmware.com.

But at the end of the day, the NSX REST API covers only a subset of what you can do from the CLI, and there are situations where it's not sufficient. I'll give you an example. Let's say you want to know how much storage is available on the NSX Manager appliance log partition. There's a REST API call that will give you a response similar to this:

GET https://nsxm/api/1.0/appliance-management/system/storageinfo

<storageInfo>
  <totalStorage>86G</totalStorage>
  <usedStorage>22G</usedStorage>
  <freeStorage>64G</freeStorage>
  <usedPercentage>25</usedPercentage>
</storageInfo>

As you can see, it answers the question of how much total space is available on the appliance, but doesn't provide the full per-partition breakdown available from the CLI via "show filesystem":

Filesystem      Size  Used Avail Use% Mounted on
/dev/root       5.6G  1.2G  4.1G  23% /
tmpfs           7.9G  232K  7.9G   1% /run
devtmpfs        7.9G     0  7.9G   0% /dev
/dev/sda6        44G   19G   24G  44% /common
/dev/loop0       16G   45M   15G   1% /common/vdisk_mnt

So what are the options here? What is not widely known is that you can use the NSX central command-line interface to remotely invoke appliance CLI commands via the REST API.

Invoking CLI Commands

The NSX REST API has a handy POST call: https://nsxm/api/1.0/nsx/cli?action=execute. All you need to provide, in addition to authorization credentials using the "Basic Auth" option, is the following body in XML format:

<nsxcli>
  <command>show filesystem</command>
</nsxcli>
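
If you would rather script this than use Postman, a rough PowerShell sketch of the same request could look like the following (the hostname and credentials are placeholders taken from the examples above, and certificate validation is not handled here):

$uri = "https://nsxm/api/1.0/nsx/cli?action=execute"
$body = "<nsxcli><command>show filesystem</command></nsxcli>"
# Build a Basic Auth header from the NSX Manager credentials (placeholder password)
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:VMware1!"))
$response = Invoke-WebRequest -Uri $uri -Method Post -Body $body -ContentType "application/xml" -Headers @{ Authorization = "Basic $auth" }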

In response you will get a body in "text/plain" format, which is the only drawback of this method: you will need to parse the response in your scripting language of choice. In PowerShell, with the result of the Invoke-WebRequest call saved in the $response variable as in the sketch above, the parsing can look something like this:

$responseRows = $response.Content -split "`n"
foreach($row in $responseRows) {
  if($row -Like "*/dev/sda6*") {
    $pctUsed = $row.Split(" ",[StringSplitOptions]"RemoveEmptyEntries")[4]
    $pctUsedValue = $pctUsed.Substring(0, $pctUsed.Length-1)
    Write-Host "Space utilization on the log partition is $pctUsed."
    break
  }
}

Conclusion

For most use cases the NSX REST API provides all the necessary information about NSX component configuration in structured JSON or XML format. This method is more of an exception than a rule, but it's a nice tool to have in your belt when you run out of options.

vSphere 6.0 REST API: A History Lesson

August 23, 2019

I'm glad to see how VMware products are becoming more and more automation-focused these days. NSX has always had rich REST API capabilities, which I can't complain about, and vSphere is now starting to catch up. vSphere 6.5 was the first release where the REST API started getting much more attention. See these official blog posts, for example:

But not many people know that vSphere 6.5 wasn't the first release where a REST API became available. Check this forum thread on VMTN, "Does vCenter 6.0 support RESTFUL api?":

I think its only supported for 6.5 as below blogs has a customer asked the same question and reply is no..

That's not entirely true, even though I understand why the OP got a "no" answer. Let me explain.

vSphere 6.0 REST API

VMware started taking its first steps towards a REST API in the 6.0 release. If you have a legacy vSphere 6.0 environment to play with, you can easily verify this by opening the following URL:

https://vcenter/rest/com/vmware/vapi/metadata/cli/command

You will get a long list of the commands available in the 6.0 release:

It may look impressive, but if you look closely you will quickly notice that they are all related to Content Library or Tagging. Here is a quote from the referenced blog post:

VMware vCenter has received some new extensions to its REST based API. In vSphere 6.0, this API set provides the ability to manage the Content Library and Tagging but now also includes the ability to manage and configure the vCenter Server Appliance (VCSA) based functionality and basic VM management.

That's right: in vSphere 6.0 the REST API is very limited. You won't get inventory data or backup and update APIs; all you can do is manage Content Library and Tagging, which, frankly, is not very practical.

Making REST API Calls

If the Content Library and Tagging use cases are applicable to you, or you are just feeling adventurous, here is an example of how to make a call to the vSphere 6.0 REST API via Postman.

All calls are POST-based and the action (get, list, create, etc.) is specified as a parameter, so pay close attention to the request format.

First you will need to generate an authentication token by making a POST call to https://vcenter/rest/com/vmware/cis/session, using "Basic Auth" for authorization; you will get a token in response:

Then change Authorization to "No auth" and specify the token in the "vmware-api-session-id" header in your next call. In this example I'm getting a list of all content libraries (you will obviously get an empty response if you haven't actually created one):

Some commands require a body. To determine the body format, make the following POST call to https://vcenter/rest/com/vmware/vapi/metadata/cli/command?~action=get with this body in JSON format:

{
  "identity": {
    "name": "get",
    "path": "com.vmware.content.library"
  }
}

Here "path" is the operation and "name" is the action, taken from the https://vcenter/rest/com/vmware/vapi/metadata/cli/command call above.
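
For completeness, here is a rough PowerShell equivalent of the Postman flow described above (a sketch only: the vCenter name and credentials are placeholders, certificate validation is not handled, and the token is assumed to come back in a "value" field as it does in later vSphere releases):

$vc = "vcenter"
# Authenticate with Basic Auth and grab the session token (placeholder credentials)
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("administrator@vsphere.local:VMware1!"))
$session = Invoke-RestMethod -Uri "https://$vc/rest/com/vmware/cis/session" -Method Post -Headers @{ Authorization = "Basic $auth" }
$headers = @{ "vmware-api-session-id" = $session.value }
# Ask the metadata service how the com.vmware.content.library "get" command is shaped
$body = '{"identity": {"name": "get", "path": "com.vmware.content.library"}}'
Invoke-RestMethod -Uri "https://$vc/rest/com/vmware/vapi/metadata/cli/command?~action=get" -Method Post -Headers $headers -Body $body -ContentType "application/json"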

If you’re looking for more detailed information, I found this blog post by Mitch Tulloch very useful:

Conclusion

There you have it: vSphere 6.0 does support a REST API, it's just not very useful, which is why no one talks about it.

This blog post won't help you much if you are stuck in the stone age and need to manage vSphere 6.0 via REST API, but it at least gives you a definitive answer as to whether a REST API is supported in vSphere 6.0 and what you can do with it.

If you do find yourself in such a situation, I recommend falling back on PowerCLI, if possible.

NSX Optimistic Locking and PowerNSX

August 3, 2019

Recently, while working on some NSX-V automation, I came across an interesting issue that I want to discuss here, since (at the time of writing) there's almost no information on the Internet that would help you solve it or even point you in the right direction. It has to do with PowerNSX and Optimistic Locking in NSX (which technically is not even a locking mechanism), but let's start from the beginning.

If you have ever used the PowerNSX module to automate NSX via PowerShell, you've probably noticed that most code examples use pipelines to run PowerNSX cmdlets. So instead of using variables, like so:

$Edge = Get-NsxEdge vRA7_edge
$LoadBalancer = Get-NsxLoadBalancer -Edge $Edge
Set-NsxLoadBalancer -LoadBalancer $LoadBalancer -enabled
New-NsxLoadBalancerApplicationProfile -LoadBalancer $LoadBalancer -Name $WebAppProfileName -Type $VipProtocol -SslPassthrough

all commands are run this way instead:

Get-NsxEdge vRA7_edge | Get-NsxLoadBalancer | Set-NsxLoadBalancer -enabled
Get-NsxEdge vRA7_edge | Get-NsxLoadBalancer | New-NsxLoadBalancerApplicationProfile -Name $WebAppProfileName -Type $VipProtocol -SslPassthrough

What's the difference, you may ask, besides the fact that the second variant is slower, because it retrieves the edge and load balancer objects multiple times instead of once? There's actually a strong reason for it. More specifically, this is the error you are going to get if you don't use pipelines:

invoke-nsxwebrequest : Invoke-NsxWebRequest : The NSX API response received indicates a failure. 409 : Conflict : Response Body: {"errorCode":101, "details":"Concurrent object access error. Refresh UI or fetch the latest copy of the object and retry the operation.", "rootCauseString":null, "moduleName":null, "errorData":null}

See, NSX uses Optimistic Locking (yes, there's Pessimistic Locking as well) to handle concurrency. Its purpose is to make sure that if you're making a change to an object in NSX, you are aware of its current state. In the above example, you saved the load balancer into a variable, changed its state to enabled, and then tried to create an application profile, supplying the load balancer saved in the variable as a parameter to the cmdlet. But the load balancer (and edge) state had changed in the meantime, so you were basically using an old (stale) version of the object. You either have to retrieve the current state of the object again, or avoid the issue altogether by simply using pipelines, which retrieve an up-to-date version of the object with every call.
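
For illustration, this is what the "retrieve it again" approach could look like if you prefer variables over pipelines, re-fetching the object before each modifying call (a sketch based on the same example as above):

# Fetch a fresh copy of the load balancer, then make the change
$LoadBalancer = Get-NsxEdge vRA7_edge | Get-NsxLoadBalancer
Set-NsxLoadBalancer -LoadBalancer $LoadBalancer -enabled
# Fetch it again before the next change, so you are not passing a stale object
$LoadBalancer = Get-NsxEdge vRA7_edge | Get-NsxLoadBalancer
New-NsxLoadBalancerApplicationProfile -LoadBalancer $LoadBalancer -Name $WebAppProfileName -Type $VipProtocol -SslPassthrough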

Read this article if you want to know more about Optimistic Locking:

If you found this useful, please leave a comment, smash that like button and hit the notification bell to never miss a new blog post again.

Quick Start With Lifecycle Manager REST APIs

December 11, 2018

Just a few years ago, coming across an infrastructure product (software or hardware) that supported a REST API was a rare thing. Today it's the opposite: buying, say, a storage array from a major vendor that doesn't support some sort of API can be seen as a potential drawback. It has gotten to the point where certain operations can only be done via the API and are not available in the GUI, so basic programming skills are becoming more and more important.

I came across such a situation with the vRealize Suite Lifecycle Manager (vRSLCM, or just LCM) product from VMware. If you have a request that got stuck, the only way to cancel it (at least at the time of writing) is to use LCM's REST API; it can't be done from the GUI.

While I was tackling this issue, I noticed that there aren't many articles on the Internet about how to make REST calls to LCM, so I thought I'd use this opportunity to show how to do it.

Authentication

The first challenge you have to deal with is authentication. LCM doesn't support basic authentication the way other products, such as NSX, do; you need a token.

This is how you can get a token in Postman:

{
	"username":"admin@localhost",
	"password":"vmware"
}

This is what it will look like in Postman:

When you click send you should get a token in response:

Making REST Calls

Now you need to specify the token as one of the headers, with "x-xenon-auth-token" as the key and the token itself as the value:
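
If you want to script this instead of using Postman, roughly the same flow in PowerShell could look like the sketch below (heavily hedged: the login URI is a placeholder for the endpoint shown in the Postman screenshot, and the name of the token field in the response may differ in your LCM version):

# Placeholder login endpoint - substitute the one used in Postman above
$loginUri = "https://lcm-hostname/<login-endpoint>"
$body = '{"username":"admin@localhost","password":"vmware"}'
$login = Invoke-RestMethod -Uri $loginUri -Method Post -Body $body -ContentType "application/json"
# Pass the returned token in the x-xenon-auth-token header on every subsequent call
$headers = @{ "x-xenon-auth-token" = $login.token }   # adjust the property name to match the actual response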

From here, you are ready to make actual REST API calls. Coming back to our example, we can go to the LCM GUI and copy the ID of the stuck request from the browser window:

And then make a DELETE call with empty body to cancel the request:

As a result, traces of the request will be completely deleted from LCM.

Note: the only catch here is that you have to remove the "v1" API version from the URL, or it will not work.

Swagger UI

LCM supports Swagger, which lets you run REST API calls straight from the browser. So if you want to feel like a hacker, open the https://lcm-hostname/api URL, where you can get the token and make requests simply by using the "Try It Out" button, specifying the required parameters and hitting "Execute".

Creating vRealize Operations Manager Alerts Using REST API

September 11, 2018

Whenever I’m faced with a repetitive configuration task, I search for ways to automate it. There’s nothing more boring than sitting and clicking through the GUI for hours performing the same thing over and over again.

These days most of the products I work with support a REST API interface, so scripting has become my solution of choice. But scripting requires you to know a scripting language such as PowerShell, certain SDKs and APIs like PowerCLI and REST, and, more importantly, the time to write and test the script. If you're going to use the script regularly, in the long term it's worth the effort. But what if it's a one-off task? You may well end up spending more time writing the script than it would take to perform the task manually. In that case there are more practical ways to improve your efficiency, one of which is to use developer tools like Postman.

The idea is that you can write a REST request that applies a certain configuration setting and use it as a template to make multiple calls by slightly tweaking the parameters. You would have to change the parameters manually for each request, which is not as elegant as providing an array of variables to a script, but still much quicker than clicking through the GUI.

Recently I worked on a VMware Validated Design (VVD) deployment for a customer, which required configuring dozens of vRealize Operations Manager alerts as part of the build, so I will use it as an example to demonstrate how you can save yourself hours by doing this in Postman instead of the GUI.

Collect Alert Properties

To create an alert in vROps you will need to specify certain alert properties in the REST API call body. You will need at least:

  • "pluginId" – ID of the outbound plugin, which is usually the Standard Email Plugin
  • "emailaddr" – the recipient email address
  • "values" property under the alertDefinitionIdFilters XML element – this is the alert definition ID
  • "resourceKind" – the resource that the alert applies to, such as VirtualMachine, Datastore, etc.
  • "adapterKind" – the adapter that the alert comes from, such as VMWARE, NSX, etc.

To determine the pluginId you will need to configure an outbound plugin in vROps and then make the following GET call to get the ID:

To find the values for the alert definition, resource kind and adapter kind properties, make the following GET call and search for the alert name in the results:

Create Alert in vROps

To create an alert in vROps, you will need to make a POST call with an XML body:

  • Use the following request URL: https://vrops-hostname/suite-api/api/notifications/rules
  • Click on the Headers tab and add the key "Content-Type" with the value "application/xml"
  • Click on the Body tab, choose raw, and in the drop-down choose "XML (application/xml)"
  • Copy the following XML request into the body and use it as a template
<ops:notification-rule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xmlns:xs="http://www.w3.org/2001/XMLSchema"
                       xmlns:ops="http://webservice.vmware.com/vRealizeOpsMgr/1.0/">
  <ops:name>No data received for Windows platform</ops:name>
  <ops:pluginId>c5f60db9-eb5b-47c1-8545-8ba573c7d289</ops:pluginId>
  <ops:alertControlStates/>
  <ops:alertStatuses/>
  <ops:criticalities/>
  <ops:resourceKindFilter>
    <ops:resourceKind>Windows</ops:resourceKind>
    <ops:adapterKind>EP Ops Adapter</ops:adapterKind>
  </ops:resourceKindFilter>
  <ops:alertDefinitionIdFilters>
    <ops:values>AlertDefinition-EP Ops Adapter-Alert-system-availability-Windows</ops:values>
  </ops:alertDefinitionIdFilters>
  <ops:properties name="emailaddr">vrops@corp.local</ops:properties>
</ops:notification-rule>

As described before, make sure to replace the following properties with your own values: "pluginId", "values" property under the alertDefinitionIdFilters XML element, resourceKind, adapterKind and emailaddr.
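
If you later decide to script this rather than use Postman, the same template could be posted with PowerShell along the following lines (a sketch that assumes your vROps instance accepts basic authentication and that the XML template above has been saved to a file; the hostname, credentials and file name are placeholders):

$vrops = "vrops-hostname"
$body = Get-Content .\notification-rule.xml -Raw   # the XML template shown above, with your values filled in
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:VMware1!"))
Invoke-RestMethod -Uri "https://$vrops/suite-api/api/notifications/rules" -Method Post -Headers @{ Authorization = "Basic $auth" } -Body $body -ContentType "application/xml"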

As a result of the REST API call you will get an alert created in vROps:

For every other alert you can keep the plugin ID and email address the same and update only the alert definition, resource kind and adapter kind.

Conclusion

By using the same REST call and changing the properties for each alert accordingly, I was able to finish the job much quicker and avoid hours of painful clicking through the GUI. As long as you have a REST API endpoint to work with, the same approach can be applied to any repetitive task.

If you’d like to learn more, make sure to check out VMware {code} project here for more information about VMware product APIs and SDKs.

Extracting vRealize Operations Data Using REST API

September 17, 2017

Scripting is an important skill today if you're part of an IT operations team. It is common to use PowerShell, or any other scripting language of your choice, to automate repetitive tasks and be efficient in what you do. Another use case for scripting and automation, which is often missed, is that they let you do more: public APIs offered by many software and hardware solutions let you manipulate their data and call functions in the way you need, without being bound by the workflows provided in the GUI.

Recently I was asked to extract data from vRealize Operations Manager that was not available in the GUI or in a report in the format I needed. At first it looked like a non-trivial task, as it required scripting and REST APIs to pull the data, but after some research it turned out to be much easier than I thought.

With Python this can be done in a few lines of code, using existing libraries that do most of the work for you. The goal of this blog post is to show that scripting does not have to be hard and that, with the right tools for the job, you can get things done in a matter of minutes, not hours or days.

Scenario

As an example of using the vRealize Operations Manager REST API, we will retrieve the list of vROps adapters, which vROps uses to pull information from the many hardware and software solutions it supports, such as Nimble Storage or Microsoft SQL Server.

The vROps API is obviously much more powerful than that, and you can use the same approach to pull other information, such as active and inactive alerts, performance statistics and recommendations. The full vROps API documentation can be found at https://your-vrops-hostname/suite-api/.

Install Python and Libraries

We will be using two Python libraries: "Requests" to make REST calls and "ElementTree" for XML parsing. ElementTree ships with Python, so we only need to install the Requests package.

I already made a post here on how to install the Python interpreter and Python libraries, so we will dive right into the vROps API.

Retrieve the List of vROps Adapters

To get the list of all installed vROps adapters, we need to make a GET REST call using the "get" method from the Requests library:

import requests
from requests.auth import HTTPBasicAuth

akUrl = 'https://vrops/suite-api/api/adapterkinds'
ak = requests.get(akUrl, auth=HTTPBasicAuth('user', 'pass'))

In this code snippet, the "import" statements specify that we are using the Requests library, as well as its implementation of basic HTTP authentication. We then request the list of vROps adapters using the "get" method from the Requests library and save the XML response into the "ak" variable. Add "verify=False" to the get call parameters if you struggle with SSL certificate issues.

As a result you will get the full list of vROps adapters in a format similar to the following. So how do we navigate it? Using the ElementTree XML library.

Parsing XML Response Sequentially

vRealize Operations Manager returns REST API responses in XML format. ElementTree lets you parse these XML responses to find the data you need, which you can then output in a human-readable format, such as CSV, and import into an Excel spreadsheet.

Parsing an XML tree requires traversing it from top to bottom. You start from the root element:

import xml.etree.ElementTree as ET

akRoot = ET.fromstring(ak.content)

Then you can continue by iterating through child elements using nested loops:

for adapter in akRoot:
    print(adapter.tag, adapter.attrib['key'])
    for adapterProperty in adapter:
        print(adapterProperty.tag, adapterProperty.text)

Children of <ops:adapter-kinds> are <ops:adapter-kind> elements. Children of <ops:adapter-kind> elements are <ops:name>, <ops:adapterKindType>, <ops:describeVersion> and <ops:resourceKinds>. So the output of the above code will be:

adapter-kind CITRIXNETSCALER_ADAPTER
name Citrix NetScaler Adapter
adapterKindType GENERAL
describeVersion 1
resourceKinds citrix_netscaler_adapter_instance
resourceKinds appliance
…

As you may have already noticed, all XML elements have tags and can additionally have attributes and associated text. From the above example:

  • Tags: adapter-kind, name, adapterKindType
  • Attribute: key
  • Text: Citrix NetScaler Adapter, GENERAL, 1

Finding Interesting Elements

Typically you are looking for specific information and don't need to traverse the whole XML tree. So instead of walking through the tree sequentially, you can iterate through the interesting elements using the "iterfind" method. For instance, if we are looking only for adapter names, the code would look like the following:

ns = {'vrops': 'http://webservice.vmware.com/vRealizeOpsMgr/1.0/'}
for akItem in akRoot.iterfind('vrops:adapter-kind', ns):
    akNameItem = akItem.find('vrops:name', ns)
    print(akNameItem.text)

All elements in the REST API responses are prefixed with a namespace. To avoid using long, fully qualified element names, such as http://webservice.vmware.com/vRealizeOpsMgr/1.0/adapter-kind, ElementTree methods support namespace mappings, which can be passed as a variable, like the "ns" variable in this code snippet.

The resulting output will be similar to:

Citrix NetScaler Adapter
Container
Dell EMC PowerEdge
Dell Storage Adapter
EP Ops Adapter
F5 BIG-IP Adapter
HP Servers Adapter

Additional Information

I intentionally tried to keep this post short, to give you all the information required to start using Python to parse REST API responses in XML format.

I have written two scripts that are more practical and shared them on my GitHub page here:

  • vrops_object_types_1.0.py – extracts adapters, object types and the number of objects. The script gives you an idea of what is actually being monitored in vROps by providing the number of objects in your vROps instance for each adapter and object type.
  • vrops_alert_definitions_1.0.py – extracts adapters, object types, alert names, criticality and impact. As opposed to the first script, this one provides the list of alerts for each adapter and object type, which is helpful for identifying potential alerts that can be triggered in vROps.

Feel free to download these scripts from GitHub and play with them or adapt them according to your needs.

Helpful Links

Python for Windows: Quick Installation

September 7, 2017

Only recently did I add Python to the list of tools I use in my job. I had always used PowerShell whenever I needed to script something, until I saw how easy Python is to use. I will be keeping it in my arsenal from now on.

In the near future I plan to write a blog post on how to use Python with REST APIs, so here I wanted to provide quick instructions on how to install Python on Windows that I can later use as a reference.

Installing Python

The latest version of Python for Windows can be downloaded from https://www.python.org/downloads/windows/. The executable installer is probably the easiest option. When installing, make sure to check the "Add Python 3.6 to PATH" option; it makes life much easier.

Installing Libraries

The Python installation already includes lots of libraries that you can use for scripting. The ElementTree library, for example, which is used for XML parsing, comes with the interpreter.

Depending on what you want to use Python for, you may need to install additional libraries. For instance, if you want to call REST APIs, you may need Requests, a library for HTTP calls.

Python uses a package manager called "pip". If it is not already in your PATH variable, find pip under the Python installation directory and run the following from a command line as administrator:

pip install requests

Once the library is installed you can use it by importing it into your scripts:

import requests

Writing Code

At this point you can call the Python interpreter from the Windows command line and start running Python commands. If you want to write a script, however, you will need an IDE. There's nothing wrong with using Notepad, but there are more efficient ways to do it.

Python for Windows comes with an IDE, simply called IDLE. It is very basic, but it provides all the essential features, such as code completion, syntax highlighting and a primitive debugger. It is not perfect, but it has everything to get you started.

Conclusion

That is a quick crash course with three simple steps to get Python up and running. I tried to keep it short to demonstrate how you can start using Python with minimum effort.

Puppet Camp 2016 Recap

December 4, 2016

Last week I had a chance to attend Puppet Camp 2016. Puppet Camp is a one-day event that is held once a year in many places around the world, including Australia. This time it was the fourth Melbourne conference, which gathered 240 attendees and several key partners, such as NetApp, Diaxion and Katana1.

In this blog post I want to give a quick overview of the keynote, customer and partner sessions, as well as my key takeaways from the conference.

First Impressions

I had never been to Puppet Camp before, so this was my first experience. The sheer number of participants clearly shows that configuration management, and DevOps in general, attracts a lot of attention from both customers and the channel.


You may have heard that in Q3 of 2015 Cisco announced Puppet support for the Nexus 3000 and 9000 series switches. This was not an accident. I had a chance to speak to NetApp, one of the vendors presenting at the conference, and they now have Puppet integration with their Data ONTAP / FAS platform, as well as the E-Series and the recently acquired SolidFire line of storage arrays. I'm sure many other hardware vendors will follow.

Keynote and Puppet Update

The conference had a single track of sessions spread throughout the day and was opened by a keynote from Robert Finn, APJ Sales Director at Puppet, who talked about the rising complexity of modern IT environments and the challenges that come with it. We have gone from tens of servers to hundreds of VMs, and are now on the verge of the next evolution from hundreds of VMs to thousands of containers. We can no longer manage environments manually, and that is where tools such as Puppet come into play, letting us manage configuration and provisioning at scale.

Rob also mentioned the "State of DevOps Report", an annual survey Puppet has now been running for five years in a row. In 2016 they collected responses from 4,600 technical professionals and shared many of their findings in a public report, which I'll link in the references section below.


Key takeaways: by introducing configuration management into their software development practices, organizations were able to achieve a 3x lower change failure rate and 24x faster recovery from failures.

Ronny Sabapathee, Puppet Solutions Engineer, gave an overview of the new features in the latest Puppet Enterprise 2016.4, such as corrective change reporting, changes to Puppet Orchestrator, enhancements in Code Manager and API improvements.

Key takeaways: the Puppet ecosystem is growing quickly, with a Docker module, a Jenkins plugin, significant enhancements to the Azure module and VMware vRealize Automation/Orchestrator integration coming soon.

Customer Sessions

Rob Kenefik from SpecSavers spoke about their journey of scaling the free version of Puppet from 10 to 290 nodes, the issues they came across and the adjustments they had to make, especially around the DB back-end.

Key takeaways: don't use the embedded Puppet database for production deployments; PostgreSQL (which is now the default) provides the required scalability.

Steve Curtis from ANZ briefly discussed how they automated the deployment of Application Performance Monitoring (APM) agents using Puppet. Steve also has a post on the Puppet blog, which I'll link below.

Chris Harwood from Healthdirect Australia touched on the sensitive topic of organizational silos and how teams become too focused on their own performance, forgetting about the customers, who should be the key priority for businesses offering customer-facing services.

He then showed how Healthdirect moved some of the ops people into development teams, giving devs access to infrastructure and making them autonomous, which significantly improved their development workflows and release frequency.

Key takeaways: the key DevOps challenges are around people and processes, not technology. Teams that don't collaborate and lengthy infrastructure change management processes can significantly hinder development teams' performance.

Partner Sessions

Dinesh Siriwardhane, who represented Versent, compared the pros and cons of master/agent vs. masterless Puppet deployments and showed a demo of Puppet certificate management.

Key takeaways: a Puppet master simplifies centralized management and provides reporting capabilities, but can be a single point of failure. A masterless deployment using GitHub has no single point of failure and is free, but can have major security repercussions if the Git repository is compromised.


Kieran Sweet and Pedram Sanayei from Sourced gave a presentation on Puppet integration with Azure and how using Puppet, instead of just the low-level Azure APIs and PowerShell, can significantly simplify deployment and configuration management in the Microsoft cloud.

Key takeaways: Azure Resource Manager is a big step forward from the old Azure Service Management (classic deployment model). In light of the significant recent enhancements in the Azure Puppet module, this can become a reasonable alternative to AWS.

Scott Coulton from Autopilot closed the conference with a session on Puppet integration with Docker and, more specifically, container orchestration tools such as Docker Swarm, Kubernetes, Mesos and Flocker. Be sure to check out Scott's blog and GitHub repository, where you can find a Puppet module for Docker Swarm, a Vagrant template and more.

Key takeaways: Docker can be used to deploy containers, but Puppet is still essential to keep configuration across the hosts consistent.

Conclusion

I spoke to a lot of customers at the conference, and what became apparent to me was that Puppet is not just another DevOps tool among many. It has a wide ecosystem of partners and has come a long way since it started as a small start-up back in 2005.

It has a strong use case for general configuration management in Linux environments, as well as providing application configuration consistency as part of CI/CD pipelines.

Speaking of the conference itself, I was pleasantly surprised by the quality of the sessions and the organization in general. Puppet Camp will definitely stay on my radar; I'd love to come back next year and geek out with the DevOps crowd again.

References