It’s been a while since I’ve posted any fresh content on this blog; hopefully I’ll come up with some ideas to keep a regular cadence of updates going.

While I was updating the cabling on the garage lab, I realized it had been a while since I had done anything with my CE lab from a version perspective; in fact, the last update I had done was March of 2019. So I figured now was as good a time as any to go ahead and upgrade the CE cluster.

Much like the luck I usually have, I went to upgrade CE thru Prism, and the upgrade seemed to fail because the USB drive was corrupted; my luck with really crappy USB drives apparently continues. So, I went over to Best Buy, bought a few $9.99 64GB PNY USB drives, came home, and started the process of getting the image file over to USB, since the CE .iso installer still hasn’t made its return.

All was going well until the hosts booted up. Now, my hosts are a bit long in the tooth, but they are still decent enough with 24 cores and 48 GB of RAM. The PNY USB drive, however, was HORRIBLY slow, so much so that I couldn’t stand it. Never again will I buy PNY drives.

So, I thought about what other options I had. The drive configuration on these CE nodes was as follows:

  • 1x 256GB Samsung EVO SSD
  • 1x 500GB Samsung EVO SSD
  • 1x 1TB Samsung EVO SSD
  • 1x 1TB Western Digital HDD

So, I figured, why not try using the 256GB SSD as the boot drive instead of a USB drive? My Supermicro hosts are old enough that a SATA DOM might be hard to come by, and I honestly had more than enough space on each node that giving up the 256GB drive wouldn’t hurt too badly.

So, I pulled the drives out of their caddies, pulled out my trusty Inatek USB drive caddy, and proceeded to drop the CE .img file onto the 256GB SSD using the gdd command, which I prefer over plain dd.
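For anyone wanting to do the same, the write itself is just a straight block-level copy of the CE image onto the raw device. A minimal sketch of what that looks like with gdd (GNU dd, e.g. from Homebrew’s coreutils on a Mac) is below; the image filename and the /dev/disk3 device path are placeholders, so verify your actual target with diskutil list (or lsblk on Linux) before writing anything.

    # Identify the target disk first; writing to the wrong device is unrecoverable.
    diskutil list                      # on Linux: lsblk

    # Unmount any volumes on the SSD sitting in the USB caddy (placeholder device path).
    diskutil unmountDisk /dev/disk3

    # Block-level copy of the CE image to the raw device.
    # gdd is GNU dd; status=progress and conv=fsync are why I reach for it over BSD dd.
    sudo gdd if=ce-installer.img of=/dev/rdisk3 bs=4M status=progress conv=fsync
    sync

Pointed at a USB stick instead of the SSD, this is essentially the same process I’d been using before; the difference is purely how fast the destination device can take the writes.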

With imaging done, and once I correctly set the BIOS on the Supermicro hosts to boot from the 256GB drive on Port 0, I booted up each of the hosts, and much to my happiness the install went thru, the CVM deployed, and the cluster was created. And the install speed was, as you’d expect, much better!

So in hindsight, given my dislike for USB boot, I wish I had thought of using an internal drive with the .img file sooner. I did this back when the CE .iso installer allowed you to select a boot drive, but for some reason I always tried to get the USB drives to work.

So, now I’m happy to say I’m not using the USB drives anymore, and I have a sturdy SSD as my CE boot drive without having to give up much space at all.


Updated 5.22.19

Coming back from the Nutanix .Next conference two weeks ago, the biggest announcement that really got me excited was the ability for Nutanix Frame to run in AHV environments. AHV joins AWS (where Frame started), Azure, and Google Cloud as a supported environment, currently in early release.

I’ll be going thru a multi-part series around Frame configurations and use cases, so stay tuned!

(more…)

If you haven’t taken a close look at AHV, the hypervisor from Nutanix, you might be missing out on something very valuable that you already have access to as a Nutanix customer. AHV addresses the majority of the use cases people have for virtualization, and it does so very well, with simple deployment, simple management, and POWERFUL features when Prism Central is added (and it’s still powerful when it’s not).

(more…)

The Nutanix team released the 2019 Nutanix Technology Champion (NTC) listing, and for the 3rd year in a row I’m honored to be part of this group of 129 professionals around the world who help evangelize, implement, and support the Nutanix platform for customers, partners, and every group in between.

It’s a group of people whose shared experience I’ve been able to learn from, and hopefully I can give a bit of experience back in return. The Nutanix platform has given me and the rest of my Data Center team at eGroup the ability to bring our customers solutions that deliver speed and certainty, simplicity and elegance to IT, rather than complex hardware and solutions that the customer will either never fully utilize or never be able to fully administer.

It’s also cool to see my friend and teammate Dave Strum on the list for another year! Good job, my man!

Here’s to a great year in 2019 with Nutanix!

Thank you to everybody on the NTC Channel, my friends and peers at Nutanix and everybody at eGroup who allows us to work with these solutions for our customers.

See the full release here:
https://next.nutanix.com/blog-40/welcome-to-the-2019-nutanix-technology-champions-ntc-31459

 

Freedom to Choose… Freedom to Play… Freedom to Cloud….

I just returned from a week in New Orleans at the Nutanix .Next conference, where I was fortunate to represent eGroup as a partner as well as being part of the Nutanix Technology Champions group.

In addition to being a conference attendee, my co-worker Dave Strum and I co-presented with one of our customers on the benefits of deploying Nutanix on Cisco UCS hardware, lessons learned, and future plans. It was fun and definitely not like your typical presentation.

There are a lot of blog posts and plenty of content covering the .Next conference news (plug for Dave here), and the Nutanix roadmap continues to dazzle and amaze people (OK, me especially) with simplicity, functionality, and yes, Freedom. The keyword here is Freedom.

And this post isn’t about recapping the .Next conference; I’ll let my peers and friends handle that. This post is about Freedom…

(more…)

I recently had the opportunity to deploy 12 Nutanix nodes for a customer across two sites (primary and DR), six of which were NX-3055-G5 nodes with dual NVIDIA M60 GPU cards installed, dedicated to running the customer’s Horizon View desktop VMs. This was my first Nutanix deployment using NVIDIA GPU cards with VMware, and thankfully there is plenty of documentation out there on the process.

A Nutanix deployment with GPU cards installed is no different than one without: you still image the nodes with Foundation just as you normally would. In this case, each site was configured with two Nutanix clusters, one for server VMs and a second dedicated to VDI. The VDI cluster was a 3-node cluster built on the NX-3055-G5 nodes, running Horizon View 7.2.0.

I’ll touch on some details of the M60 card below, then get into a few places where I hit issues with the deployment and how I fixed them, and finally cover some host/VM configuration and validation commands.
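As a taste of the kind of host-side validation I mean, these are the sort of generic checks you can run from an SSH session on an ESXi host once the NVIDIA vGPU manager VIB is installed; they’re standard NVIDIA/esxcli commands rather than anything from my exact runbook, which comes later in the post.

    # Confirm the NVIDIA vGPU manager VIB made it onto the ESXi host.
    esxcli software vib list | grep -i nvidia

    # Check that both M60 GPUs are visible and the host driver is healthy.
    nvidia-smi

    # Review the host graphics mode and per-device assignments.
    esxcli graphics host get
    esxcli graphics device list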

(more…)