Updated 5.22.19

Coming back from the Nutanix .Next conference two weeks ago, the biggest announcement that really got me excited was the ability for Nutanix Frame to run in AHV environments.  AHV joins AWS (where Frame started), Azure and Google Cloud as a supported environment, currently in early release.

I’ll be going through a multi-part series around Frame and its configuration use cases. So stay tuned!


If you haven’t taken a close look at AHV, the hypervisor from Nutanix, you might be missing out on something very valuable – something you already have access to as a Nutanix customer. AHV addresses the majority of the use cases people require with virtualization, and it does so very well with a simple deployment, simple management and POWERFUL features when Prism Central is added (and still powerful when it’s not).


Freedom to Choose… Freedom to Play… Freedom to Cloud….

I just returned from a week in New Orleans at the Nutanix .Next conference, where I was fortunate to represent eGroup as a partner as well as being part of the Nutanix Technical Champions group.

In addition to being a conference attendee, my co-worker Dave Strum and I co-presented with one of our customers on the benefits of deploying Nutanix on Cisco UCS hardware, lessons learned and future plans.  It was fun and definitely not like your typical presentation.

There are a lot of blog posts and content around the .Next conference news (plug for Dave here), and the Nutanix roadmap continues to dazzle and amaze people (ok, me especially) with simplicity, functionality and yes, Freedom.  Keyword here is Freedom.

This post isn’t about recapping the .Next conference, though; I’ll let my peers and friends handle that.  This post is about Freedom…


I recently had the opportunity to deploy 12 Nutanix nodes for a customer across 2 sites (Primary and DR), 6 of which were 3055-G5 nodes with dual NVIDIA M60 GPU cards installed and dedicated to running the Horizon View desktop VMs for this customer. This was my first experience doing a Nutanix deployment using the NVIDIA GPU cards with VMware, and thankfully there is plenty of documentation out there on the process.

The Nutanix deployment with GPU cards installed is no different than without: you still go through the process of imaging the nodes with Foundation, just like you’d do without GPU cards. In this case, each site was configured with 2 Nutanix clusters, one for server VMs and a second dedicated to VDI. The VDI cluster was a 3-node cluster built on the NX-3055-G5 nodes, running Horizon View 7.2.0 specifically.

I’ll touch on some details of the M60 card below, and then get into some of the places where I had a few issues with the deployment and how I fixed them, and finally some Host/VM configuration and validation commands.


Well, it’s that time again… 2017 has come and gone, and sometimes I just don’t know where all the time went and what I was able to accomplish.

I’m happy to say that for the 2nd year in a row I’m part of a great group of people in the IT industry, those of us pushing the value of Nutanix and their simple, effective and scalable HyperConverged solutions.

Pretty cool in the large world of IT, to be a part of this small group of folks in the #NutanixNTC family, and especially joined by another eGroup Member, Dave Strum (http://vthistle.com) on this journey.

Thank you Nutanix for giving us an amazing platform to help our customers along on their journey, and I cannot wait to see what’s in store for 2018!

To read the full post about the 2018 Nutanix Technology Champions, follow the link below.

http://next.nutanix.com/t5/Nutanix-Connect-Blog/Welcome-to-the-2018-Nutanix-Technology-Champions-NTC/ba-p/26328

This week I had the pleasure of deploying 2 more Nutanix blocks on behalf of one of our partners, who is now starting to highly recommend Nutanix for their customers’ deployments of critical systems.

The installation was pretty vanilla: 3 NX-1065-G5 nodes at the primary site and a matching set at the DR site.  For the VMware components, we went with the vCenter 6.5 appliance (I love the stability and speed of the 6.5 appliance, by the way), and for the ESXi hosts we went with 6.5 (build 4887370).

The install went great, super fast and easy as is always the case with Nutanix deployments, and we were off and rolling for the customer deployment.

After running the command ncc health_checks run_all post-install (using ncc version 3.0.4-b0379d15), I noticed that the results were calling out 3 hosts for having disabled services.

Detailed information for esx_check_services:
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog

After doing some research on why sfcbd-watchdog wasn’t starting, and trying to start it manually, I came across this KBase from VMware, which detailed that this is expected behavior starting in ESXi 6.5.

Wondering if the NCC code just wasn’t updated for this specific change from VMware, I checked the Nutanix Knowledge Base, and came across this link which details that services identified by the ncc health_checks hypervisor_checks esx_check_services command should be enabled.

Ok, so that makes sense… ESXi 6.5 has been out long enough to assume that the ncc scripts have been updated to accommodate the 6.5 changes.  So time to get the service re-enabled, and check ncc again.

To enable the service on an ESXi 6.5 host, use the command esxcli system wbem set --enable true (be sure to use double hyphens!).  Per VMware, if a 3rd party CIM provider is installed, sfcbd and openwsman should start automatically.  Just to be safe I also ran /etc/init.d/sfcbd-watchdog start followed by /etc/init.d/sfcbd-watchdog status to make sure my services started.
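Put together, here’s the full sequence as run on each ESXi host (for example over an SSH session to the host):

```shell
# Re-enable the WBEM service on an ESXi 6.5 host (note the double hyphens!)
esxcli system wbem set --enable true

# Start the sfcbd watchdog now and confirm it is running
/etc/init.d/sfcbd-watchdog start
/etc/init.d/sfcbd-watchdog status
```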

So let’s see what we get now after running the ncc checks again, using the command ncc health_checks hypervisor_checks esx_check_services to simplify my results.

Results look much better, no more warnings about disabled services on the ESXi hosts.

Running : health_checks hypervisor_checks esx_check_services
[==================================================] 100%
/health_checks/hypervisor_checks/esx_check_services [ PASS ] 
+-------+-------+
| State | Count |
+-------+-------+
| Pass  |     1 |
| Total |     1 |
+-------+-------+
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log

Good to know that VMware purposefully has disabled this service, and it’s easy to put that in a checklist for future deployments.  I do wish though that since Foundation is taking care of the ESXi install and customization, they would add those 2 cli commands to the routine to make those services start, if they truly are needed.
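In the meantime, a small script can at least print the commands for every host in a cluster so you can review and run them in one shot. This is just a sketch, not anything Nutanix or VMware provides; the helper name and the example host IPs are made-up placeholders, and it assumes root SSH access to the hosts:

```shell
#!/bin/sh
# Print the re-enable command for each ESXi host passed in. Reviewing and
# pasting the output is safer than blindly executing against the hosts.
wbem_commands() {
  for h in "$@"; do
    echo "ssh root@$h 'esxcli system wbem set --enable true && /etc/init.d/sfcbd-watchdog start'"
  done
}

# Example with placeholder host IPs:
wbem_commands 10.0.0.1 10.0.0.2 10.0.0.3
```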

Hope this helps if you run into this same issue!