by Brendan Allen

Review: VMware vSphere gets much-needed facelift

Reviews
Mar 27, 2017
Desktop Virtualization, Virtualization, VMware

vSphere 6.5 is easy to install, sports a new VM management console and takes a stab at Docker integration.


VMware’s vSphere 6.5 virtualization platform fixes many issues we had with vSphere 6.0 and delivers significant improvements in management, security and high availability. There are also first steps toward vSphere-Docker integration.

Upgrading, and in some cases installing, vSphere 6.0 was arduous; the good news is that is no longer the case. Our Lenovo server upgraded with ease. Our HPE Gen8 and Gen9 servers required a different ISO, but installed without drama. An ancient Dell 1950 appears unable to upgrade past 6.0, so be sure to check moldy hardware for compatibility.

The biggest news for day-to-day administrators is the new vCenter Server appliance, and it’s breathtakingly more evolved than prior versions. It’s worth the price of the upgrade for the sheer functional and aesthetic value.

The vCenter app, run either on Windows or as an appliance VM, is the administrative heart of vSphere. The spot on the wall marked vSphere 6.0, where we beat our heads against it, can now be painted over.

Project Photon, which brings the first big step in vSphere-Docker integration, shows the world a production-ready way for Docker and presumably other containers to live in the VMware world.

While there’s still lots to be done, VMware does container hosting better than Windows 2016 Server editions.


vCenter Server appliance

There are many changes in vSphere 6.5, but since vCenter is the administrative console, we gave it the most attention. It was a near-total remake, and despite the massive changes, it was comparatively bug-free and smooth in operation.

It’s also simpler in some ways. As an example: The new vCenter appliance has an Update Manager built-in. This means no more installing a separate Windows VM and then installing the Update Manager on that and linking it with vCenter.

It is also possible to upgrade easily from a working vCenter 6.0 appliance, unlike the 5.x-to-6.0 update. We deployed the vCenter 6.5 appliance, set a temporary IP address for it, pointed it to the old vCenter appliance/server, reconfigured the new appliance with the old appliance's settings, shut down the old one and started up the new one.

It’s vastly simpler and far less gruesome than prior editions, and the installer for the appliance now works on Mac and Linux (of course, it still runs on Windows).
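For scripted or repeatable deployments, the vCSA ISO also ships a command-line installer alongside the GUI one. A rough sketch of an upgrade run, assuming the ISO is mounted at /mnt and a JSON template has been filled in from the samples on the ISO (the paths and template name here are ours; exact flags vary by build):

cd /mnt/vcsa-cli-installer/lin64
./vcsa-deploy upgrade --accept-eula my-upgrade-template.json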

The vCenter appliance also includes a revamped web-based management UI on Port 5480, including updated views for host and memory, database and networking. The appliance can be updated from this user interface. Previously, a long login process was required that drove certain browser configurations nutty, largely from the ill effects of using Adobe Flash. Flash, however, still remains.
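The management UI answers at the appliance's own address on that port (the hostname here is ours):

https://vcsa.example.local:5480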

The new vCenter appliance also includes native VCSA high availability and file-based backup and recovery using any of the scp, ftp(s) or http(s) protocols. Because a vCenter appliance outage can leave a data center without a control plane, keeping vCenter alive and accessible has become critical, and now it's much easier.

In testing, we found vCenter High Availability (HA) pretty easy to set up, provided the vCenter appliance is on the same cluster where vCenter HA is needed. We used the supplied wizard to create it. The wizard clones the vCenter Server appliance to make a passive appliance and then creates a witness appliance on a different port group from the management network.

The separate connection circuit helps keep HA communications flowing if the primary port group or its settings go foul. Failover can also be triggered manually.

VMware also made it far simpler to back up and recover a vCenter appliance, and it's built in, requiring no snapshots or cloning, which were problematic in prior versions because of infrastructure change syncing. The settings and configuration can be backed up using ftp(s), scp or http(s).

In a nod to security, we found that backup data can be encrypted with a password, although not with a certificate. To restore from the backup, we started from the vCenter appliance installer and went from there. However, if HA is in use on the appliance, it has to be disabled before restoring the configuration, which is then replicated to the passive and witness nodes.
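The file-based backup can also be kicked off through the appliance's REST interface. A rough sketch, assuming the appliance management API's backup-job endpoint and an scp target; the address, credentials and field names here are ours and should be checked against the API Explorer (authenticate first, as in the API example later in this review):

curl -k -H "vmware-api-session-id: <session-id>" -H "Content-Type: application/json" \
  -X POST https://vcsa.example.local/rest/appliance/recovery/backup/job \
  -d '{"piece":{"location_type":"SCP","location":"10.0.100.50/backups/vcsa","location_user":"backup","location_password":"secret","backup_password":"secret"}}'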

Enter PhotonOS

The vCenter 6.5 appliance now runs atop PhotonOS instead of SUSE Linux Enterprise Server (SLES). PhotonOS is also the base for VMware's container control plane. This makes VMware less dependent on SUSE for updates, at the price of having to point the finger of blame only at itself, since VMware maintains PhotonOS.

For those using an all-Windows-based vCenter deployment, it is now possible to move over to the appliance with the "vCenter Server Appliance Migration Tool". And since the Update Manager is now built into the appliance, it also benefits from the native HA that can be enabled in the vCenter appliance.

Update Manager is enabled by default so you can get patches and upgrades for your ESXi hosts right away.

Because most everything is now built into the appliance and has been optimized, VMware claims a "2x increase in scale and 3x in performance." This claim is difficult to measure. The UI loads far faster, on more browsers, and is "snappier," but these aren't quantifiable in our admittedly small test environment.

The vCenter webUI doesn’t require browser plug-ins, which speeds initial access immensely, although it still needs work.

You don't necessarily need the Adobe Flash that was required to access vCenter before, but we found there is still a lot of functionality missing from the web-based client that was formerly anchored in the Flash app. Sure, starting, stopping and creating (limited) VMs is nice and all, but why can't the webUI configure datastore clusters, monitor Cluster Resource Utilization, deploy OVF/OVA templates or migrate a VM to another datastore?

There are so many things missing from the webUI that it's hard to recommend using it at this moment, save for basic tasks.

On the plus side, the HTML5 UI can be used on any modern web browser with no plugins required, even a smartphone, tablet, or — how is your refrigerator at browser access? Gotta give it bonus points for that.
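For reference, the two clients live at different paths on the vCenter server (the hostname here is ours):

https://vcsa.example.local/ui              (HTML5 vSphere Client, no plug-ins)
https://vcsa.example.local/vsphere-client  (Flash-based vSphere Web Client)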

Certain administrators will be able to use this most of the time if their job doesn't require messing with the vSphere infrastructure itself (VMware publishes a long list of features the HTML5 client doesn't yet support).

If your browser supports Adobe Flash, there are some new features in the Flash-based UI that are nice. It’s no longer necessary to install a client-integration plugin (in 6.0 you had to install another plugin for your browser to add functionality to the Flash client like uploading files to a datastore, or OVF/OVA deployment). Now that is all built-in or uses native web browser commands.

vSphere 6.5 supports new, simplified REST-based APIs to manage VMs. The idea is that an administrator can do more with fewer lines of code, scripting it all with REST. There is an optional web-based API Explorer that documents the API and permits trying out different REST commands; most of the commands supported by the API can be exercised there. Logging in to the API Explorer with your vSphere credentials provides the most options.
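Outside the API Explorer, the same API can be driven from the command line. A minimal sketch, assuming a hypothetical vCenter address and credentials: authenticate once to get a session ID, then pass it on subsequent calls.

curl -k -u 'administrator@vsphere.local:password' -X POST https://vcsa.example.local/rest/com/vmware/cis/session
# Returns {"value":"<session-id>"}; use it to list the inventory's VMs:
curl -k -H "vmware-api-session-id: <session-id>" https://vcsa.example.local/rest/vcenter/vm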

Resource Management – HA and DRS

New on the resource management side is Proactive High Availability, a feature that is hardware vendor-dependent (it requires a third-party plug-in) and integrates with a vendor's monitoring software. We were unable to obtain the needed plug-ins for our hardware for this review.

Server health stats are sent to the Distributed Resource Scheduler (DRS) in vCenter, and from there DRS makes decisions based on the information it receives, which in turn optimizes VM resource utilization.

There is a new server host health state called quarantine mode. It is slightly different from vSphere's traditional maintenance mode, where the host becomes completely unusable. In quarantine mode, the server can still be used, though at a reduced capacity.

If a server is placed in quarantine mode, DRS will not move new VMs onto it from other servers as long as doing so can be avoided without hurting VM performance or violating DRS rules. If the rules determine there is a performance impact, some VMs may be migrated to another host in the cluster for optimization, according to the rules set by an administrator and subject to the constraints of the vendor plug-in (and its logic).


The failure conditions can be edited through the third-party plug-in in the webUI, which includes various hardware failure monitoring options that determine when a host goes into quarantine mode.

vSphere HA Orchestrated Restart

New and very useful: it's now possible to create dependency chains that control the order in which HA restarts VMs from failed hosts.

For example, if you have a web app that relies on a database and web server, you would probably want the database VM to start up first and finish initializing and then have the web server start up next. This can be done with more than two VMs.

Scripts no longer need to be written to wait for dependencies to come alive and be vetted before chains of interdependent VMs become functional, although we found there can be gaps in this functionality.

The vCenter Auto Deploy function (which network-boots ESXi hosts for provisioning) is simpler to use now that it has a GUI; previously everything had to be done using the PowerCLI command-line interface.

Now it's possible to create deployment rules or custom ESXi images directly in the vSphere client. This will be useful for organizations with a large number of ESXi hosts, or those that need PXE-boot-based provisioning.

Disk Level Security and Encryption

The vSphere 6.5 infrastructure supports new VM-level disk encryption, but there's a catch: it requires a third-party KMIP 1.1-compliant key manager. As we don't currently have one and it's an extra-cost option, we didn't test this feature, but found it intriguing.

Where a compliant key manager is present, any VM disk can be encrypted, and the encrypted disk can be managed using the storage policy framework.

This requires no changes to vMotion, the process that moves VMs among server hosts, usually live. The encryption is transparent, although encrypted VMs require adapted access to manipulate them externally.

We found that VMware finally supports a secure boot model that works with UEFI Secure Boot for VMs. There is a simple checkbox to enable it for a VM; however, most existing VMs will likely have to be reinstalled, because switching a guest from BIOS to UEFI firmware does not go smoothly.

Secure boot must be supported in the guestOS/targeted VM itself as well, as VMware vSphere 6.5 doesn’t magically add it to a VM. Some OSes that are known to include Secure Boot support are Windows 8, 10, Server 2012/2016, PhotonOS, RHEL/CentOS 7, Ubuntu 14.04, 16.04 and ESXi 6.5. ESXi hosts can also boot using Secure Boot as long as UEFI firmware is available on the physical server. The ESXi kernel has added “cryptographic assurance of ESXi components”.


Tote that log

A prayer said many times by VMware admins and security people asking for more detailed error messages has been answered. There’s a lot going on in sophisticated installations, and failures can occasionally be gruesome to fathom beyond troubleshooting notes.

Logs now show more information about what actions users take, when and where they took them, and who did it. As an example, if a VM was reconfigured in a previous version, the log might say something like "Virtual machine reconfigured."

But now there is extra data showing before and after states. When changing a VM from one network to another, for instance, the log displays which device changed, which network it changed to (and what it was originally), who modified it and so on. The more detailed information isn't just for auditing; it also helps with troubleshooting.

Some of the logging also shows that VMware is becoming more detailed in its metadata tracking, which may eventually deliver on a promise made at VMworld 2016 regarding ultra-portability of workloads from one data center, or even one cloud, to another. VMware is keeping track of much more, and much more tracking will be needed to deliver the VM portability it has promised.

New Support: vSphere Integrated Containers

Based on the tenets of Project Photon, a Linux distribution assembled by VMware, official container support comes only with vSphere 6.5 Enterprise and Enterprise Plus licenses. It has possibilities for tight, layered security, but the control plane connections and constructions via Docker Swarm and OpenStack are not well documented.

A management portal called Admiral is in beta, but we don't test beta software without strong signals that it's nearly ready. There are other nagging constraints we found in testing.

The VMware-Docker implementation is an interesting model, where vSphere is the container host, not Linux. Containers are technically deployed as VMs, but not in VMs. Each container is isolated from the host and from other containers. TLS (with automatic and CA-chained security certificates) can protect communications, but networking is somewhat limited at this time.

This means that vSphere is the infrastructure so that you can use its networks, its datastores, etc. You don’t need to run a separate Linux VM that will be the Docker host (however, you will want another host that has Docker tools installed in order to remotely run Docker commands on the virtual container hosts).

Integrated Containers uses a concept of a Virtual Container Host (VCH) accessed by an app binary that installs on Mac, Windows, and Linux. The Docker container pull (getting a workload from a Docker repository) lands on ESXi, and is contained by PhotonOS as an intermediary.

As an example, we created a VCH under Linux by:        

./vic-machine-linux create --target 10.0.100.243 --user 'Administrator@vsphere.local' \
    --compute-resource extremeCluster --bridge-network vic-bridge --image-store iSCSI \
    --volume-store iSCSI:container --no-tlsverify --force --name containers \
    --public-network-ip 10.0.100.112 --public-network-gateway 10.0.100.71/16

We had three networks available for communication: private, management and public, each with specific port groups (virtual plug jacks).

At this point in testing, we hit a new and frustrating pet peeve: we needed Port 2377/tcp outbound open (which is not among the built-in ESXi firewall ports), and the rule has to be manually re-added every time an ESXi host reboots, because manual firewall rulesets are not persistent for some unknown reason. Creating a persistent ruleset is supposedly supported; see VMware's documentation for more information.
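The manual workaround amounts to dropping a custom ruleset file onto the host and refreshing the firewall. A sketch, assuming ESXi's standard custom-ruleset XML format (the file name is our own, and the file disappears on reboot, hence the annoyance):

# On the ESXi host:
cat > /etc/vmware/firewall/vicoutgoing.xml <<'EOF'
<ConfigRoot>
  <service id="0100">
    <id>vicoutgoing</id>
    <rule id="0000">
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>2377</port>
    </rule>
    <enabled>true</enabled>
    <required>true</required>
  </service>
</ConfigRoot>
EOF
esxcli network firewall refresh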

Containers are launched in a minimal PhotonOS VM. There are some caveats we found while running Docker with newer clients. The Docker API version in the vSphere integration is 1.23, and most newer versions of Docker speak version 1.24, which causes an error when trying to connect. We had to run export DOCKER_API_VERSION=1.23 to enable a connection.
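Pointing a newer Docker client at the VCH we created above looked roughly like this (the address is the public IP from our vic-machine command; pinning the API version works around the mismatch):

export DOCKER_API_VERSION=1.23          # match the VCH's Docker API version
export DOCKER_HOST=10.0.100.112:2376    # VCH public address and port
docker --tls -H $DOCKER_HOST info       # verify the connection before running containers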

To run an nginx container on Port 1080, we used the following command:

docker --tls -H $DOCKER_HOST run -d -p 1080:80 --name mynginx nginx

DOCKER_HOST is the IP address and port of the VCH (Port 2376 by default).

There is also an "enterprise-class" container registry server called Harbor, which can be deployed as an appliance into your vSphere infrastructure. It can be used to store and distribute Docker images, but with more of an enterprise mindset, focusing on security, identity and management.
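Pushing images into a Harbor registry uses standard Docker commands; a sketch, assuming a hypothetical registry address and Harbor's default "library" project:

docker login harbor.example.local
docker tag nginx harbor.example.local/library/nginx
docker push harbor.example.local/library/nginx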

And if desired, you can just run instances of Red Hat, CentOS, Ubuntu or even Windows as Docker hosts, along with the control planes and security constructs used before, inside those operating systems and away from the VCH.

Overall

This update to 6.0 has some profound changes in it that many admins will enjoy. There’s even something for the experimenters who want to leverage an existing VMware infrastructure for Docker container rollouts, although there isn’t full functionality available just yet, especially in networking.

How we tested

We used HP ProLiant DL560 Gen8 and DL580 Gen9 servers and a Lenovo ThinkServer RD630 in a vSphere cluster. The two HPs were upgraded to 6.5 and the Lenovo was a fresh install. We also tried to use an older HP DL585 G5, but we were unable to upgrade it to 6.5 (because of unsupported devices). We used an older Dell (formerly Compellent) SAN for our iSCSI needs. For vCenter backup and recovery, we used an Ubuntu 16.04 VM with an ssh server for scp. For integrated containers, we used separate Linux VM and MacOS laptops to run remote Docker commands and create the VCHs on the cluster.