Automating the World Around Me

April 24, 2017
by zach

vRA 7 Server Deployment Fails After VM is Deployed From Template

Recently, I purchased enough equipment to complete a homelab environment. Everything went well until the last step of deploying a new VM through vRA 7.2. I asked a couple of colleagues what they thought, and they hadn't seen it before. I searched VMTN and Google but didn't find the exact cause of the issue, so I decided to get this out there in case someone else runs into it.


To set the stage, I have a small deployment of vRA 7.2 running in a nested environment. My first catalog item is a Windows Server 2012 R2 VM. The template was prepped and a customization specification was ready to be applied. Using just vCenter, I could deploy a VM from the template and use the customization specification to customize the guest successfully. However, when I attempted this process through vRA, I received the following error right after the clone completed.

The following component requests failed: vSphere_Machine_1. Request failed: Machine “servername”: 
CustomizeVM: Error getting property ‘info’ from managed object CustomizationSpecManager.


I also received the following error in vCenter.

Set virtual machine custom value: A specified parameter was not correct: key

I tried a few different things to resolve the issue, like creating a new customization spec, but everything I did always pointed back to vRA trying to initiate the next step after the VM was deployed from the template.


As I searched blogs and VMTN for answers, I discovered the following thread. It isn't the smoking gun, but it did get me pointed in the right direction. It describes a permissions issue causing the error Danny saw, which happens to be the same error I experienced. Next, I took a look at the permissions granted to my svc_vRA account. It had full admin privileges at the data center level. Since this is my homelab, there's no reason I can't grant it more access, so I granted it admin privileges at the vCenter level. This change gives it access to the customization specifications, which are stored above the data center level. I kicked off a new deployment and received a successful deployment of a base Windows Server 2012 R2 VM.
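The scoping issue can be illustrated with a rough sketch (a toy model with hypothetical names, not vSphere API code): propagating permissions flow down the inventory tree, so a grant at the data center level never reaches objects stored at the vCenter root, like customization specifications.

```python
# Toy model of vCenter permission inheritance (illustration only, not
# the vSphere API). Each inventory object has a parent; a propagating
# permission covers an object if it is granted on that object or any
# ancestor of it.

PARENTS = {
    "Datacenter": "vCenter-root",
    "CustomizationSpecManager": "vCenter-root",  # specs live above the DC
    "VM-Folder": "Datacenter",
}

def covered(obj, grant_point):
    """Return True if a permission granted at grant_point reaches obj."""
    while obj is not None:
        if obj == grant_point:
            return True
        obj = PARENTS.get(obj)
    return False

# Admin at the data center level: fine for VMs, useless for cSpecs.
print(covered("VM-Folder", "Datacenter"))                   # True
print(covered("CustomizationSpecManager", "Datacenter"))    # False
# Granting at the vCenter level covers both.
print(covered("CustomizationSpecManager", "vCenter-root"))  # True
```

The same walk-up-the-tree logic is why moving the grant one level higher fixed the deployment.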

Make sure the account you are using within vRA has sufficient permissions. Then ensure they are granted at the correct level!

April 21, 2017
by zach

Moving My Career AHEAD

As many of you already know, I joined AHEAD a couple of months ago. I started as a Senior Technical Architect on February 7th. This is my jump out of the customer space into consulting. I felt this was the best time in my career and in my personal life to make this move. I had become bored with the day-to-day activities within a customer environment. My last company had plenty of technology to work with, but it was just advancements of the same old stuff I had been using for years. Therefore, a change was needed before I was completely burned out on IT in general.

I had been in contact with AHEAD for some time, but the time was not quite right. They reached out to me in January, and the ball was soon in full motion to get me on board. They wasted zero time getting me engaged with clients, as I was on-site with a client in my first two weeks. In the two months since joining, I have been busy the entire time, not only learning how the consulting side works but also learning new methods and new technology.

I am excited about my future at AHEAD. When initially searching for companies I was willing to work for, AHEAD stood out because of the talent on staff. Two months in, the talent at AHEAD has surprised me even more. The best part is that everyone is willing to assist me in whatever way they can. It is definitely a team atmosphere. I'm glad to be here and ready for the challenges ahead!

I will also be paying more attention to this blog. I already have four posts in the queue resulting from issues and experiences I have come across in the past two months.


April 12, 2017
by zach

vRA 7.2 Active Directory Policy Failing to Create New Computer Object

I love the new Active Directory Policy feature within vRealize Automation (vRA) 7.2. It allows easy management of Active Directory (AD) objects, such as computer objects, when a new VM is provisioned. I like this integration much better than the CCC plugin that was created for vRA 6.x a couple of years ago. The flexibility of Active Directory Policies within vRA is highly desirable for most admins, and it can be fairly dynamic when paired with its custom property.

The Issue

Without much work, the Active Directory Policy configuration is quick and simple. However, I encountered a problem where the workflow within vRealize Orchestrator (vRO) could not create a new computer object during an event subscription lifecycle state. Unfortunately, the error isn't very descriptive.

AD Object Creation Failure

With not much to go on, I decided to perform the same operation with the regular AD workflows from the AD plugin in vRO's library. I received the same error when using those workflows. Choosing a different OU to deploy to also resulted in an error.

The Solution

I changed the service account to a domain admin account and was met with a successful creation of an AD computer object. At that moment, I realized the original service account did not have proper rights on the OU I was trying to create/delete computer objects in. It is an easy fix, but without much of an error, it can be frustrating to troubleshoot.
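The underlying check is simple delegation: creating a computer object in an OU requires the create-computer-objects right on that OU. A toy sketch of the comparison (the ACL table and account names here are made up for illustration, not pulled from AD):

```python
# Toy OU ACL model (illustration only, not the AD API). Creating a
# computer object in an OU requires the "Create Computer objects"
# right delegated on that OU.
ACLS = {
    "OU=Servers,DC=lab,DC=local": {
        "LAB\\Administrator": {"create_computer", "delete_computer"},
        "LAB\\svc_vra": set(),  # the misconfigured service account
    },
}

def can_create_computer(account, ou):
    """Return True if the account holds the create-computer right on the OU."""
    return "create_computer" in ACLS.get(ou, {}).get(account, set())

print(can_create_computer("LAB\\Administrator", "OU=Servers,DC=lab,DC=local"))  # True
print(can_create_computer("LAB\\svc_vra", "OU=Servers,DC=lab,DC=local"))        # False
```

The domain admin passes the check trivially; the real fix is delegating the create/delete computer object rights to the service account on the target OU rather than using a domain admin.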

Other than this user error, the Active Directory Policy integration works very well and is a must-have for environments with Active Directory.


July 13, 2016
by zach

vRA Could Not Create a SSL/TLS Secure Channel


At the end of Monday, I noticed our vRA implementation was not provisioning new servers. A failure of a new machine request was reported two minutes after the submission/approval. I looked through the logs and found an error stating that the request could not create an SSL/TLS secure channel. Therefore, I performed my proper engineer duties and hit the interwebs for a solution.

vRA Couldn't Create SSL/TLS Secure Channel

Solution? Not So Fast.

Great! I found a VMware KB article (2123455) that describes my error verbatim. Scrolling down to the resolution, I found it is a communication issue between the DEM-Worker servers and vRO. VMware references a specific Microsoft patch (3061518) that would have been installed on the DEM-W servers and needs to be removed. Therefore, I logged onto our DEM-W servers and found the patch was indeed installed. Unfortunately, I noticed it had been installed since August 9, 2015, which happened to be the day the servers were initially stood up. I was not sold on the idea that they had worked for 11 months and then all of a sudden quit working because of this patch.


I opened a case with VMware to look into it. A vRA support log bundle was generated and sent off for review. The support engineer asked me to remove the patch even though it had been working properly for 11 months. I found I could not directly remove it, as it wasn't shown in the list of Windows updates that could be uninstalled. So I waited for another solution…

The next day, I was provided an update showing there was a roll-up update from Microsoft that may be the culprit: KB3161606. Sure enough, this patch had been installed on both DEM-W servers over the weekend. I uninstalled it and rebooted both servers. Success! IaaS server provisioning is now completing without issues. Microsoft pushed the patch out in June. Hopefully, VMware gets around to updating their KB article to include KB3161606 alongside KB3061518.
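A quick way to audit the DEM-W servers for either patch is to compare the installed hotfix list against the known-problematic KBs. A minimal sketch (the installed list here is stubbed; on a real server you would feed in the output of `Get-HotFix` or `wmic qfe`):

```python
# Updates known to break vRA DEM-Worker <-> vRO communication:
# KB3061518 (per VMware KB 2123455) and the KB3161606 roll-up.
BAD_KBS = {"KB3061518", "KB3161606"}

def flag_bad_patches(installed):
    """Return the problematic KBs present in an installed-hotfix list."""
    return sorted(BAD_KBS & set(installed))

# Stubbed hotfix list standing in for real Get-HotFix/wmic qfe output.
installed = ["KB2999226", "KB3161606", "KB3102467"]
print(flag_bad_patches(installed))  # ['KB3161606']
```

Any hit means the server needs the patch uninstalled and a reboot before provisioning will work again.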

April 15, 2016
by zach

Custom Property is Not Displayed Correctly in vRA 6.2

I just ran into a quirk where a custom property is not displayed correctly in vRA 6.2. I created a new custom property with a DropDown control type, added the property to a build profile, and attached the build profile to a blueprint. However, when I opened the catalog item to view it, there was no dropdown. I've included some screenshots to show an example.

As an example, I created a custom property in my property dictionary named "Test.Dropdown".
Custom Property

I added a few property attributes to be displayed in the dropdown.
Custom Property Attributes

I added the custom property to a build profile.
Build Profile
I added the build profile to the desired blueprint.
Then I went to the catalog item related to the blueprint I added the dropdown to and WTF?

Originally, when I was trying to figure out why a text field was displayed instead of a dropdown, I double-checked everything. I copied and pasted the name of the custom property to ensure I hadn't typed something wrong or left a trailing space… I then remembered I had a property layout attached to this blueprint. Once I added the property to the layout, the dropdown was displayed as expected.

I guess this is what you call a "feature." It pulled the name of the custom property from the build profile and displayed the property definition name, but went no further.

April 5, 2016
by zach

Busy, but definitely not forgotten

It may seem I forgot this blog was here, but it is always in the back of my mind. Along with non-stop day-to-day requirements and projects at work, the last year of life outside of work has also been busy. Last summer, my wife and I went to Europe for just over two weeks. We did "the beer tour": Belgium-Germany-Ireland. It was definitely a trip of a lifetime, and we would go back in a heartbeat. Meanwhile, my wife's and my master plan has gone perfectly, as she is due with our first baby in mid-September. With that, house projects on our 100-year-old house have been accelerated. The critical things are mostly done. Year after year, new experiences. None bigger than 2016!

Busy with…. Analytics?

Also last fall, I was fortunate to begin working with Dave Bartoo to provide statistical analysis for upcoming college football games. I have mentioned this is "my profession meets my obsession." Throughout the season it was on and off, but then I was provided a truckload of data covering the entire year for Clemson and Alabama ahead of the National Championship. With the help of Python and Splunk, I was able to come up with some telling deficiencies/strengths on both sides of the ball. Dave attempted to provide the stats to his contacts on both sides two days before the big game. Unfortunately, both sides declined the stats. All was not lost, though. Our stats were gladly accepted by the TV commentators on the national broadcast, and four of my stats were mentioned on air. Another of my stats was not used, and Kirk Herbstreit said the opposite on-air, but mine proved to be true in favor of Alabama.

Normally, my wife and I would have watched the game anyway since we are huge football fans, but we couldn't have cared less considering the two teams. Since we knew there was a possibility the stats could be used on-air, though, it was one of the most interesting games to watch given the statistical analysis I had performed for it. Who knows how the upcoming 2016 season will turn out and what stats I can come up with.

I know 2016 will be full of new experiences in and out of work which will no doubt keep me busy. Now to document some of those experiences on here so it is not forgotten.




November 30, 2015
by zach

Commitmas is Almost Here

Last year, Matt Brender (@mjbrender) started a little movement called Commitmas. As we approach the end of the year, Commitmas is almost here again! At its heart, it is all about learning and sharing with the community. In the past, GitHub was an application developer's playground, but as infrastructure becomes more and more managed by code, revision control is a must. GitHub or another revision control system should be at the top of every IT pro's list of skills to learn.

Commitmas ran only twelve days last year, but this year a couple of the vBrownBag crew (Jonathan Frappier & Rob Nelson) have expanded it tremendously to cover the entire month of December. Community engagement has expanded with the addition of an entire series of vBrownBags (sign up here), a twitter account (@commitmas), and a new Commitmas repository for 2015.

I didn't join in on Commitmas last year, as I didn't see it until it was almost over. As I started learning Python in late 2014 into early 2015, I used GitHub to track my progress while learning GitHub at the same time. Unfortunately, I haven't used it much since then except for sharing a few PowerCLI scripts and vRO workflows. I'm not sure what I will be committing this Commitmas, but I plan to make it through the entire month!

Get signed up on GitHub, join the vBrownBag events, and be social while you learn a new skill! I urge you to join the challenge with the community!


August 20, 2015
by zach

Unable to Expire or Power On vRA Managed Machine

Eric and I are deploying a distributed installation of vRealize Automation 6.2.2 with the help of a VMware architect on-site. We have progressed nicely in less than two weeks, with the exception of some load-balancing issues. Today we were deploying a VM into our environment and testing different functions within the vRA interface. After a bit of testing, we found we were unable to expire or power on a vRA-managed machine. Here's where we ran into an issue.

A VM had been deployed by vRA and was online. I set the VM to expire. We checked the Requests tab to see if the request had successfully processed. It said it did but the VM never powered down. Also when viewing the VM within the list of Items in vRA, the status still reflected “On”.

Time to troubleshoot! We checked the log under Infrastructure > Monitoring > Log. The following error was shown:


Workflow ‘FireVirtualMachineEventRequest’ failed with the following exception: The HTTP request is unauthorized with client authentication scheme ‘Anonymous’. The authentication header received from the server was ‘NTLM,Negotiate’. Inner Exception: The remote server returned an error: (401) Unauthorized.

After a bit of digging, the "NTLM,Negotiate" bit in the error was the key. We checked the Web server's IIS Windows Authentication providers. Negotiate was listed above NTLM, which is the incorrect order, as shown.


After moving NTLM to the top provider as shown, make sure you restart IIS with "iisreset" from the command line. We then tested expiring the VM. It was successful!


Later when I attempted to power on the VM, I received the same error and the VM was never powered on. The status wasn’t expired, it was just powered off.

I then logged into my DEM Orchestrator servers and checked the same setting with the providers in IIS. Sure enough, Negotiate was listed above NTLM. I moved NTLM to the top and restarted the DEM-Orchestrator services.

Success! The VM powered on successfully!

The NTLM provider should have been ahead of Negotiate since we ran Brian Graf's vRA 6.2 Pre-requisite Script, but for some reason the providers weren't configured correctly.
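The whole fix boils down to the ordering of the Windows Authentication providers, which we ended up checking by hand on three servers. A small sketch of that check (the provider lists are stubbed here; on the servers themselves the list comes from IIS):

```python
def ntlm_first(providers):
    """True if NTLM is listed before Negotiate, the order vRA needs."""
    return providers.index("NTLM") < providers.index("Negotiate")

# As found on our Web and DEM-Orchestrator servers: broken order.
print(ntlm_first(["Negotiate", "NTLM"]))  # False -> reorder, then iisreset
# After moving NTLM to the top: correct order.
print(ntlm_first(["NTLM", "Negotiate"]))  # True
```

Any server where this comes back False needs NTLM moved to the top and IIS (or the DEM-Orchestrator services) restarted.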

March 6, 2015
by eric

Asynchronously remove datastores via vCO! (Updated)

Anyone with more than 3 hosts absolutely dreads removing data volumes from a VMware environment. It is a mind-blowingly tedious and redundant process that VMware has yet to fully address. First you must unmount the volume(s) from all the hosts. This part, thankfully, is easy: just select the proper datastore, right-click, and select 'Unmount'. A nice little wizard comes up and runs the appropriate checks to make sure the datastore can indeed be unmounted. Hit next, select the hosts you wish to unmount from, and VMware kicks off the unmount procedure for that datastore on the selected hosts.

Well, if you thought you were done and ready to unpresent that datastore, you are mistaken. vSphere still sees the LUN, and if you simply unpresent it from the hosts, they will really not like you one bit until you reboot them. You must go to each host's storage adapters configuration page, find the correct LUN, right-click, and detach. Here is one of VMware's KB articles for those that need more information on the process.

Imagine the time it takes to go through 10 hosts, or how about 50 hosts without automation?

So… let's fix that and automate the entire process via vCenter Orchestrator! Here is a quick run-down of what the workflow does. The first thing you need to do when running the workflow is select the cluster the datastore is presented to.


After selecting the proper cluster and hitting next, you are presented with a dialog to select the datastore or datastores you wish to unmount and detach from the hosts in the selected cluster.


After selecting the datastores, just hit “submit” and away it goes.  So what does it do?  Here is what the schema looks like for the workflow.


The workflow starts off by getting all the hosts of the cluster you select.  It then grabs the needed information from the datastore(s) and stores it in a couple of arrays to be used later.  Take a quick look at the actual scripting behind this.


It grabs the UUIDs needed for the unmount procedure and the canonical NAA name for the detach sequence. Who knows why VMware doesn't allow these procedures to be done using just one of these variables, or at the very least fully document the process, but this works… for now.

*Note:  you might need to adjust the SLICE number in your environment to grab the correct UUID.  14 is what works in my environment.
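To illustrate what the SLICE number does: the UUID sits after a fixed-length prefix in the datastore's path, so a slice of 14 matches a `/vmfs/volumes/` prefix. A sketch with a made-up UUID (hypothetical values; verify the offset against your own datastore URLs):

```python
# Hypothetical datastore path; the fixed-length prefix is sliced off
# to leave the bare UUID. A SLICE of 14 matches "/vmfs/volumes/", but
# the offset depends on the URL format, so check it in your environment.
url = "/vmfs/volumes/571d27a3-1c2b3d4e-5f60-0025b5aa0001/"
SLICE = 14  # length of the "/vmfs/volumes/" prefix
uuid = url[SLICE:].rstrip("/")
print(uuid)  # 571d27a3-1c2b3d4e-5f60-0025b5aa0001
```

If your paths carry a different prefix (for example a `ds://` scheme in front), the slice offset shifts accordingly, which is exactly why the note above says to adjust it.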

So after the workflow has the necessary info, it can proceed to the unmount loop. We use the counter to select the host to work with from the host array, then kick off the unmount procedure, which loops through each datastore in the datastore array you selected and unmounts it on that host. Here is the scripting code for that workflow.


After it has looped through all the hosts and the unmounts have finished, the workflow exits the unmount loop, resets the counter, and drops into the detach loop. The detach loop has the same setup as the unmount loop, except it launches the detach workflow for each host instead of the unmount workflow. Take a look at its scripting code.


Once the detach loop is complete and all detach operations have finished, the workflow exits the detach loop, kicks off a rescan for datastores on the hosts in the cluster to clean up the LUN paths, and then exits.
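The overall flow of the loops above can be sketched in plain Python (the unmount/detach/rescan callables are stand-ins for the vRO sub-workflows, not real API calls):

```python
def remove_datastores(hosts, datastores, unmount, detach, rescan):
    """Mirror of the workflow: unmount loop, then detach loop, then rescan.

    unmount/detach take (host, datastore); rescan takes a host. All
    three are stubs standing in for the asynchronous vRO sub-workflows.
    """
    # Unmount loop: the counter walks the host array, unmounting every
    # selected datastore on each host.
    for host in hosts:
        for ds in datastores:
            unmount(host, ds)
    # Counter resets, then the detach loop runs with the same shape,
    # detaching the backing LUN on each host.
    for host in hosts:
        for ds in datastores:
            detach(host, ds)
    # Finally, rescan the hosts to clean up the dead LUN paths.
    for host in hosts:
        rescan(host)

# Record the call order with simple stubs.
calls = []
remove_datastores(
    ["esx01", "esx02"], ["ds1"],
    unmount=lambda h, d: calls.append(("unmount", h, d)),
    detach=lambda h, d: calls.append(("detach", h, d)),
    rescan=lambda h: calls.append(("rescan", h)),
)
print(calls[0])   # ('unmount', 'esx01', 'ds1')
print(calls[-1])  # ('rescan', 'esx02')
```

The key ordering constraint the sketch captures is that every unmount must finish before any detach begins, and the rescan only runs once all detaches are done.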

That is pretty much it; all of this is done asynchronously on the hosts to save even more time. Let me know what you think or if you have any questions. Have fun tailoring this workflow for your needs!

You can find this workflow package on either Github or Flowgrab.



April 21, 2015 Update:  Updated workflow to 2.1.0 based on Jason’s feedback.  There is now a sleep timer of 15 seconds and an initial counter reset before the unmount.  The updated workflows were pushed to the links above.

March 1, 2015
by eric

Howdy all!

Thanks for the intro Zach.  I am both nervous and excited to start blogging.  I feel that it is time for me to make my appearance on the world-wide web in a more productive manner.  I have quite a few lofty goals this year, both personally and professionally, that could provide good writing opportunities as well as some comedic gold I am sure.  I tend to be light-hearted, but also don’t beat around the bush.  I am not afraid to call people, products, or companies out when they do questionable or flat-out dumb things.  So with that all said, let’s do this.  Head to the about page to read about me professionally and I will soon have a new post up that I hope you like.