Automating the World Around Me

March 6, 2015
by eric

Asynchronously remove datastores via vCO! (Updated)

Anyone with more than 3 hosts absolutely dreads removing data volumes from the VMware environment.  It is a mind-blowingly tedious and redundant process that VMware has yet to fully address.  First you must unmount the volume(s) from all the hosts.  This part, thankfully, is easy: just select the proper datastore, right click, and select ‘Unmount’.  A nice little wizard comes up and runs the appropriate checks to make sure the datastore can indeed be unmounted.  Just hit next, select the hosts you wish to unmount from, and VMware kicks off the unmount procedure for that datastore on the selected hosts.

Well, if you thought you were done and ready to unpresent that datastore, you are mistaken.  vSphere still sees that LUN, and if you simply unpresent it from the hosts, they will really not like you one bit until you reboot them.  You must go to each host’s configuration page for storage adapters, find the correct LUN, right click, and detach.  Here is one of VMware’s KB articles for those that need more information on the process.

Imagine the time it takes to go through 10 hosts, or how about 50 hosts without automation?

So…let’s fix that and automate the entire process via vCenter Orchestrator!  Here is a quick run-down of what the workflow does.  The first thing you need to do when running the workflow is select the cluster the datastore is presented to.


After selecting the proper cluster and hitting next, you are presented with a dialog to select the datastore or datastores you wish to unmount and detach from the hosts in the selected cluster.


After selecting the datastores, just hit “submit” and away it goes.  So what does it do?  Here is what the schema looks like for the workflow.


The workflow starts off by getting all the hosts of the cluster you select.  It then grabs the needed information from the datastore(s) and stores it in a couple of arrays to be used later.  Take a quick look at the actual scripting behind this.


It grabs the UUIDs needed for the unmount procedure and the canonical NAA name for the detach sequence.  Who knows why VMware doesn’t allow these procedures to be done using just one of these variables, or at the very least fully document the process, but this works…for now.

*Note:  you might need to adjust the SLICE number in your environment to grab the correct UUID.  14 is what works in my environment.
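To make that slice number concrete: if the workflow is reading a mount path of the form `/vmfs/volumes/<uuid>`, the `/vmfs/volumes/` prefix is exactly 14 characters, which is where 14 comes from. A quick illustration (the example uuid is made up):

```javascript
// Hypothetical mount path; "/vmfs/volumes/" is exactly 14 characters,
// which is why slice(14) leaves just the VMFS UUID.
var mountPath = "/vmfs/volumes/546e8065-1e55c9d2-8b4c-984be1047524";
var vmfsUuid = mountPath.slice(14);
// vmfsUuid is now "546e8065-1e55c9d2-8b4c-984be1047524"
```

If your environment hands back a longer prefix (a full `ds://` URL, for example), the slice start will be different, which is likely why the number needs adjusting per environment.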

So after the workflow has the necessary info, it can proceed to the unmount loop.  The host to work with is selected from the host array using the counter, then the unmount procedure kicks off, looping through each datastore in the datastore array you selected and unmounting it on that host.  Here is the scripting code for that workflow.
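Stripped to its essentials, the unmount loop body amounts to something like this (a simplified sketch, not the exact workflow code; `unmountVmfsVolume` is the vSphere API method exposed on the host’s storage system object, and `host` would be a VC:HostSystem pulled from the host array at the current counter value):

```javascript
// Sketch of the unmount step: for one host, unmount every selected
// datastore by its VMFS UUID.
function unmountDatastoresOnHost(host, vmfsUuids) {
    var storageSystem = host.configManager.storageSystem;
    for (var i = 0; i < vmfsUuids.length; i++) {
        storageSystem.unmountVmfsVolume(vmfsUuids[i]);
    }
}
```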


After it has looped through all the hosts and the unmounts finish, the workflow exits the unmount loop, resets the counter, and then drops into the detach loop.  The detach loop has the same setup as the unmount loop, except it launches the detach workflow for each host instead of the unmount workflow.  Take a look at its scripting code.
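Stripped down, the detach step looks something like the sketch below. One wrinkle worth noting: `detachScsiLun` takes the LUN’s own uuid rather than the NAA name, which is presumably why the workflow carries the canonical name around; it is only used to find the right LUN first (again, a simplified sketch, not the exact workflow code):

```javascript
// Sketch of the detach step: find each LUN on the host by its canonical
// NAA name, then detach it by the LUN's uuid.
function detachLunsOnHost(host, naaNames) {
    var storageSystem = host.configManager.storageSystem;
    var luns = storageSystem.storageDeviceInfo.scsiLun;
    for (var i = 0; i < naaNames.length; i++) {
        for (var j = 0; j < luns.length; j++) {
            if (luns[j].canonicalName === naaNames[i]) {
                storageSystem.detachScsiLun(luns[j].uuid);
            }
        }
    }
}
```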


Once the detach loop is complete and all detach operations have finished, the workflow exits the detach loop, kicks off a rescan for datastores on the hosts in the cluster to clean up the LUN paths, and then exits.
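For reference, the final rescan maps to a couple of storage-system calls per host (a sketch, assuming the standard `rescanAllHba` and `rescanVmfs` methods are what the workflow uses):

```javascript
// Sketch of the cleanup step: rescan storage adapters and VMFS volumes
// on every host so the stale LUN paths disappear.
function rescanHosts(hosts) {
    for (var i = 0; i < hosts.length; i++) {
        var storageSystem = hosts[i].configManager.storageSystem;
        storageSystem.rescanAllHba();
        storageSystem.rescanVmfs();
    }
}
```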

That is pretty much it; all of this is done asynchronously on the hosts to save even more time.  Let me know what you think or if you have any questions.  Have fun tailoring this workflow for your needs!

You can find this workflow package on either Github or Flowgrab.



April 21, 2015 Update:  Updated workflow to 2.1.0 based on Jason’s feedback.  There is now a sleep timer of 15 seconds and an initial counter reset before the unmount.  The updated workflows were pushed to the links above.

March 1, 2015
by eric

Howdy all!

Thanks for the intro Zach.  I am both nervous and excited to start blogging.  I feel that it is time for me to make my appearance on the world-wide web in a more productive manner.  I have quite a few lofty goals this year, both personally and professionally, that could provide good writing opportunities as well as some comedic gold I am sure.  I tend to be light-hearted, but also don’t beat around the bush.  I am not afraid to call people, products, or companies out when they do questionable or flat-out dumb things.  So with that all said, let’s do this.  Head to the about page to read about me professionally and I will soon have a new post up that I hope you like.

February 27, 2015
by zach

.Net 3.5 Feature Install Fails on Windows 2012

Recently, I ran into an issue where the .Net 3.5 Feature install fails on Windows 2012. Many search engine searches, blog posts, and message board posts later, I found a solution.

As you may know, Windows Server 2012 comes with the .Net Framework 4.5 feature preinstalled. It does not have the .Net Framework 3.5 feature installed. Normally, it is an easy process to add the feature – Server Manager->Add Feature->Check the .Net Framework 3.5 feature->Install. But what if the server you are attempting to install .Net 3.5 onto is not allowed to connect to the Internet? If it is a VM, quickly attach a Windows Server 2012 .iso, specify an alternate source path pointing to “[DVD Drive Letter]:\sources\sxs”, and it installs, right?

Well, not every time. On most of the servers I have come across, usually fresh builds, attaching the ISO and specifying it as the alternate source path does the trick. But I have found a couple servers that will fail, indicating that the correct files are not in the attached .ISO, even though they are. The specific error I received was 0x800F081F.

I finally found the correct KB article that outlined the actual issue and a resolution. I found numerous other reasons why .Net 3.5 wouldn’t install, but this was the cause in my case. I also found that if any language packs were installed prior to trying to install the feature, it would also fail: any installed language pack needs to be uninstalled, then the feature enabled, and then the language pack(s) can be reinstalled.

The KB article points out that if either KB2966827 or KB2966828 is installed on the system, the .Net Framework 3.5 feature installation will fail, regardless of where the source files are located. I downloaded the fix and installed it on the server (no reboot required!), and the feature was enabled without issue.
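If you would rather check and work around this from an elevated command prompt, something along these lines should do it (standard Windows tooling; verify the KB numbers against the article before removing anything, and note the DISM line assumes the media is mounted as D:):

```
REM See whether either offending update is installed
wmic qfe get hotfixid | findstr /i "KB2966827 KB2966828"

REM Uninstall whichever one is present (no reboot was needed in my case)
wusa /uninstall /kb:2966828 /norestart

REM Then retry the feature install from the mounted media
dism /online /enable-feature /featurename:NetFx3 /all /source:D:\sources\sxs /limitaccess
```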

February 27, 2015
by zach

vCO 5.5 Appliance Access Permissions

I hadn’t worked much with the “Copy file from vCO to guest” workflow until the past six months, and I quickly ran into issues with the default vCO 5.5 appliance access permission settings. When I first tried to use it, I created a folder named “vcofiles” in the /opt/ directory on the vCO appliance based on a guide I was following. I had copied the file I wanted to transfer to multiple guests up to the /opt/vcofiles/ directory on the vCO server and gave root 777 rights to the vcofiles directory and the individual files. I kicked off the workflow and received the following error:

vCO No Permissions!

So I went back and checked to ensure I gave it full 777 access. I had. I then researched a bit more and found that the js-io-rights.conf file needs to be edited to allow vCO rights to the new directory I created. Nick Colyer had a good post on what needed to be done over here and there was also a VMware KB article about it. If you check out the KB, you will notice that it applies to versions 4.2.x and 5.1.x, but not 5.5.x. Of course, I was using the 5.5 appliance. The meat and potatoes of both articles still hold true; the only difference I have found is the new location of the js-io-rights.conf file in the 5.5.x appliance.

5.1.x and older location: /opt/vmo/app-server/server/conf/
5.5.x+ location: /etc/vco/app-server/

I added read, write, and execute (+rwx) permissions for my new directory. After I finished, here’s what my settings looked like:

vCO 5.5 Appliance Access Permissions
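For reference, js-io-rights.conf is just a list of permission flags followed by a path, one entry per line. With my directory added, the relevant entries looked roughly like this (a sketch from my setup; the default entries shipped with your appliance version may differ):

```
-rwx /
+rwx /var/run/vco
+rwx /opt/vcofiles
```

The `-rwx /` line denies everything by default, and each `+` line whitelists a specific path for the vCO scripting engine.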

As you can guess, this is done to ensure that the application the users are accessing from the vCO client can only touch directories specifically defined by the vCO admin.

February 24, 2015
by zach

Welcome Eric!

A past co-worker of mine, Eric TeKrony, not only wanted to jump into vCO more after I left, but he also wanted to contribute back to the community. So far we have combined forces on the Get-VM GitHub organization and have uploaded a few of our vCO workflows and actions. Along with uploading resources to GitHub, he may be on here from time to time releasing resources or just documenting an issue he found and resolved.

Along with vCO, he has extensive knowledge in many other realms of IT. I’m excited to see what he can bring to the community through this blog and other avenues.

So welcome Eric!

January 17, 2015
by zach

vCO Workflow – Update PernixData Host Extensions

Before I get into this workflow: if you have not tried PernixData’s FVP in your environment, it is a must. All you need is a couple of SSD drives and a free trial downloaded from their site, and you can begin seeing the advantages quickly. It not only speeds up your VMs but gives your array a break! Now to the goods.

PernixData is installed inside the vSphere host as a host extension. Unfortunately, the upgrade process is not as streamlined as the rest of their product’s experience. It requires us to upload the upgrade zip file to each host, put each host into maintenance mode, and run a few commands through the shell of each host. Definitely a repetitive and lengthy process if you have numerous hosts.

I took a look at the official documentation, available within the support portal on PernixData’s website, and determined I could quickly put together a vCenter Orchestrator (soon to be vRealize Orchestrator) workflow. Not only could I automate the upgrade process on a single host, I could do it at a cluster level. I figured this is appropriate, as PernixData should be deployed at a cluster level to take full advantage of the technology without limiting the agility of the VMs within the host. Below I will walk you through the process.


Make sure you have read the prerequisites before kicking off this upgrade process. No VM can be accelerated by FVP during the upgrade process, so they need to be put into Write-Through mode. On the opposite end of this process, don’t put the VMs back into Write-Back mode until all hosts are upgraded and confirmed to be in working order.

Once you are ready to commit to the upgrade, my workflow requires you to upload the upgrade zip file to the “/opt/vcofiles/” directory on the vCO appliance. If this directory does not exist, create it or modify the workflow to look elsewhere. If you are not using the vCO appliance, I recommend it over the Windows server install, especially if you are using the vCO service that is installed with the Windows version of vCenter. You could modify the script to look at a different location, like a Windows directory, if you choose to make it work that way. The workflow pulls the zip file from the specified directory and scp’s it to each host, upgrading the hosts in serial. Now you are ready to run the workflow.

You will be prompted to select your vCO appliance and a cluster of hosts to upgrade.

Select Environment Variables

Then you have the option to upgrade all hosts in the cluster or just a selection of hosts. You may want to select a single host to test out the process, or, in the event that a host has an issue with an upgrade, you can select the remaining hosts in a future pass.

Select Hosts
Next you will enter the filename of the upgrade zip. Be sure to include the .zip file extension.

PernixData Upgrade File

On the next screen, enter the host credentials you would use if you were upgrading FVP manually.

Host Credentials

On the last page, enter the credentials of the vCO appliance. Then kick off the workflow.

vCO Credentials

Heavy Lifting

The workflow will gather all of the hosts you have approved for the upgrade and put them in an array. It will then select the first host, put it into maintenance mode, turn on SSH, upload the zip file to a temporary directory on the host, then send the following PernixData supplied command to uninstall the current host extension:

cp /opt/pernixdata/bin/<uninstall script> /tmp/ && /tmp/<uninstall script>

Once complete, it will then run the following install command:

esxcli software vib install -d /tmp/<upgrade filename>.zip

The workflow will clean up after itself and remove the upgrade zip file and the uninstall script from the /tmp/ directory. The host will then be taken out of maintenance mode and SSH turned back off.
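Boiled down, the per-host sequence looks roughly like the sketch below. The `actions` object is a hypothetical stand-in for the vCO SSH plug-in calls and the maintenance-mode workflows the real package uses; the esxcli string matches the install command above:

```javascript
// Sketch of the per-host upgrade sequence. "actions" is a made-up
// interface standing in for the vCO SSH plug-in and library workflows.
function upgradeHost(host, zipName, actions) {
    actions.enterMaintenanceMode(host);
    actions.enableSsh(host);
    actions.scp(host, "/opt/vcofiles/" + zipName, "/tmp/" + zipName);
    actions.runUninstallScript(host); // PernixData-supplied uninstall
    actions.run(host, "esxcli software vib install -d /tmp/" + zipName);
    actions.run(host, "rm /tmp/" + zipName); // clean up the upload
    actions.exitMaintenanceMode(host);
    actions.disableSsh(host);
}
```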

Full Schema

Below, I have included pictures of the full schema from vCO. This first one shows the schema for the cycle of host upgrades.

Cluster Array

The following schema shows where the real work goes on.

Work Schema

As you can see, there is some error handling. I discovered a couple of returns that vCO believed to be “erroneous,” but after I checked and confirmed with PernixData support, they turned out to be false positives.

Even though this workflow has worked in my environment, it does NOT mean it will work in yours. Make sure you read PernixData’s official documentation and know the process as well as comb over the workflow itself to ensure it won’t cause issues within your environment. Use at your own risk and remember, I am not responsible if this workflow causes issues within your environment.

I have uploaded the orchestrator package to my Github page. If I make any changes to the workflow, both pages will be updated with the latest version. Automate all the things!!

September 8, 2014
by zach

HA Agent Alerts and Issues

This past week, I ran into two different HA Agent issues that threw up alerts or caused me some administrative headaches. As a reference point, we are running vSphere/vCenter 5.1, but I feel these issues affect a broader range of products based on the KB articles I have come across.

Issue 1: Within one of our clusters, a VM was rebooted by HA because of a backup issue. I’m thankful that HA saw the issue and rebooted the VM, so quickly that our monitoring solution didn’t even notice any downtime. That’s great! The alert was thrown at the cluster level for obvious reasons and put a yellow alert banner on the Summary tab of the cluster, not in the Alarms tab. The yellow banner indicated that “HA initiated a failover in <cluster> on <datacenter>”. I don’t see an alarm in the Definitions that is specific to this alert, as it was for a single VM; I guess that is why it wasn’t displayed in the Alarms tab. Now how do I acknowledge and clear the alert? I discovered a KB article (2004802) that describes my issue exactly. The cause is written as:

This issue occurs when a HA failover event occurs in the cluster, which triggers the warning message. This locks the warning message and prevents it from being removed.

I don’t like the last sentence of that cause. Why lock it? Let me acknowledge and clear the warning. As described in the resolution, I disabled HA on the cluster and re-enabled it. The alert was gone as expected.

It looks like this affects vCenter 4.0-5.5. I assume this is not seen often as clearing an alert in this manner is downright inefficient.

Issue 2: During a troubleshooting session with VMware support, I was asked to reboot a host. No big deal. After our troubleshooting completed, I noticed that DRS was not migrating the VMs back to the rebooted host. I attempted to manually vMotion a VM to the host in question, but the wizard indicated that the HA Agent on the host was “Unreachable.” A quick search turned up the following KB article (2011192). The symptom description was word for word what I was seeing from the host. Some relevant notes:

1. The host was accessible by vCenter
2. This host was the only host showing these symptoms.
3. All hosts and vCenter reside in the same VLAN.

I attempted the following to resolve the issue with no luck:

1. “Reconfigure for vSphere HA” on the host.
2. Restarted management agents on the host.
3. Rebooted the host again.

The KB article mentions restarting the vCenter service. I felt this was overkill, as the issue was isolated to a single host, so I did not perform that troubleshooting step. Many of the resolution steps in the KB article treat the host as Not Responding, but that was not the case here.

In the end, I disabled HA at the cluster level and then re-enabled it. After that, all of the HA Agents on each of the hosts in that cluster reported back correctly.

**When in doubt, just disable HA and re-enable it across the cluster. In the vCenter HA world, it is the equivalent to rebooting a computer to clear any weird issues.**

September 8, 2014
by zach

EVO:RAIL – My thoughts



At VMworld last month, VMware revealed Project MARVIN as EVO:RAIL. This is VMware’s entry into the hyper-converged space. Companies like Nutanix and Simplivity have made waves with their product offerings making it easier for companies, small and large, to deploy a virtual infrastructure. Whether or not companies have bought into this way of deploying infrastructure, most have looked into it.

EVO:RAIL is a new way of deploying hyper-convergence that is not directly sold by VMware but rather the partnered vendors that have manufactured the physical appliance. “One throat to choke” is the name of the game here. Every bit of this appliance will be supported by calling a single number.

ROBO – With a few configuration parameters entered by an admin, the appliance sets itself up quickly and provides an easy to use interface for even a novice admin. I believe this is a perfect product for ROBO (Remote Office/Branch Office). In my experience determining specs, deploying, and training on-site ROBO staff, the RAIL would have been a great product to implement. Many of the smaller ROBO staff members I have worked with were just learning about virtualization. Changing their mindset of what is possible and then teaching them how to use the new technology in a short amount of time on-site can be challenging. Based on the videos I have seen showing the implementation of single and multiple EVO:RAIL appliances, going on-site to train the staff could be optional.

Enterprise? – Obviously I feel good about the EVO:RAIL being a ROBO solution, but I am definitely not sold on it being deployed in an enterprise datacenter. One of the big reasons I feel this way is the integration with current deployments. I’ve seen some discussion about the possibility of integrating it into an already deployed VSAN environment. I saw that it is “technically possible,” but I gathered from the hesitant responses that it should not be done. An enterprise could instead use the EVO:RAIL for a specific use case like VDI, or even as an easy way to segregate a workload for a division/group within the organization. There are limitations on how it can be deployed, but remember, this is a hyper-converged appliance and is not meant to be integrated with our traditional infrastructure.


EVO:RAIL’s codename MARVIN logo

UI – RAIL has its own UI to administer the environment instead of using the normal vSphere/vCenter clients, and by doing so VMware has reduced the complexity of the environment dramatically. The UI runs purely on HTML5, which is a big improvement over the vSphere Web Client that we all love to hate. I assume the vCenter 6.1/6.5? version of the web client that will be forced down our throats will run on HTML5. Maybe we won’t mind that web client! VMware should definitely take the UI team from EVO:RAIL and reassign them to the vCenter Web Client to perform a 100% rewrite.

I’d love to get a shot at playing around with one of these appliances and working with others to deploy it for a specific use case. In the long-term, it will be interesting to see how not only VMware (and the EVO partners) but also Nutanix and Simplivity will address upgrading to newer appliances, as the hardware bought today will be aged in a few years.



May 25, 2014
by zach

Long time, no blog post….

It has been over a year since I last posted on here. Quite the break. Not really a break, I guess; more like being crazy busy and lazy at the same time. Is that a thing? If so, that’s me. 2013 was the craziest year of my life. It began with a nice promotion and a new title, Senior Systems Engineer. Nothing really changed except for that and some additional pay. That was April.

May rolled around with plenty of prep for the wedding. Oh yea, did I mention I got hitched to the love of my life/best friend? We got married in June, in Kauai. Best two week vacation, ever. July rolled around with the reception back home, which was a ton of fun. If you’re looking to get married in the future, I highly recommend a destination wedding. Totally worth it.

Next up, August. My house went up for sale. Sold in 46 hours. Then we found a house we had been searching for in an awesome part of town. LOCATION! LOCATION! LOCATION! Put a bid in on that house and then quickly put my wife’s house up for sale. Sold her house in 10 days flat. After three weeks of battling for the house we really wanted, we locked it in and never looked back. Moved in during October. August through November included too much real estate talk with a sprinkle of VMworld 2013.

Let’s jump to 2014. I became more open to other companies that could potentially pull me away from IPG. A few opportunities came along, but I didn’t feel energized about them. Jelecos, on the other hand, had me sold from the first interview. So I went for it. I’m about 3 weeks in. I like the direction of the company and where they want to be in the coming years. I feel I can make an impact here, which is a great feeling. Jelecos also has a great bunch of people, so that makes it even better.

I was also awarded vExpert for a second year in a row. Thanks to Corey from VMware for heading up the program this year. Sad to see Troyer leave but we all leave at some point. The program is definitely in good hands with Corey.

As I will be working more with vCO, vCOPS, vCAC, etc., I plan to post more on here about my experiences and even throw up some scripts or workflows for others to check out and use in their environments. Hopefully my next post isn’t a year from now. I’ll make sure it isn’t.



May 2, 2013
by zach

Too Many Groups (Another Tale of Being Half-Baked)

After months of waiting for VMware to make Update 1 available for vSphere/vCenter 5.1, it finally arrived. We had hoped that it would provide fixes to some half-baked items that we had noticed after deploying vCenter 5.1. As of right now, I personally can’t say if those issues or annoyances we found have been fixed or not.

Unfortunately, I can’t log in to the “preferred” web client that VMware wants us to adopt so badly.


According to KB article 2050941, the admin account I use to log in to vCenter belongs to too many groups in Active Directory. Are you kidding me? It says there is no definitive number of groups that is the threshold, but it is normally around 19. I belong to 24, while my co-worker who can log in belongs to 20. Clearly our threshold is somewhere in there. My question is: how long has VMware been running 5.1 U1 in their labs and somehow never noticed this issue?

There are three workarounds for this issue.

  • Log in to vCenter Server via the vSphere Client using the Use Windows session credentials option. – So now I need to use a client that doesn’t include the new 5.1 features?
  • Work with your Active Directory administrator to modify the group membership of the vCenter Server login account to a minimum. – hahaha! There’s a reason why I belong to so many groups. My day-to-day activities depend on those memberships.
  • Limit the number of domain based identity sources to no more than one. – We have users from around the world logging in that need those identity sources available. Odds are most of them can’t login either though.

Yet again, VMware has released software/updates that seem to be half-baked and not fully tested for even the largest of their customers. This just adds more fuel to the fire that is pushing us to really consider Microsoft’s latest Hyper-V release. Twelve hosts have yet to be ordered this year for a refresh of old vSphere hosts in our environment. Maybe they will be Hyper-V hosts instead.