A look at the lifecycle of the Virtual Infrastructure one customer has experienced. We'll start by touching on his introduction to virtualization and VMware technologies, then follow the evolution of the VI at his current place of work.
VMware Workstation was discovered around the same time and was used to work iteratively on software deployment scripting. Snapshots made life so much easier, plus virtualization made multitasking even multitasking-er. Eventually it was deployed into the student labs, giving us more control over the experience and letting us tailor workloads per class or group of classes.
A number of server systems, especially older Sun SPARCstations, were nearing the end of their usable life. The decision was made to replace them with Linux. At the same time, we decided to try virtualizing server workloads in the data centre. VMware ESX 2.5 with VirtualCenter 1 was the result. It just worked.
The ESX2 cluster was very straightforward, consisting simply of multiple hosts with shared FC-backed storage. At the time we overlooked the need for a dedicated network segment for vMotion and management traffic, so we ran out and bought a consumer-grade switch to get it going. Needless to say, the switch was replaced with an enterprise-class device soon after.
I changed jobs shortly after this experience. From my personal perspective it was a leap forward, on to something new and interesting. From my virtualization experience perspective, I was about to take a step back in time.
One of the main reasons my new company brought me on board was to help them deal with their VMware GSX systems. For those who don't know or remember GSX, it was basically a version of Workstation designed to run server VMs.
Due to the way GSX was designed, there weren't any clusters. Each system ran an OS, in this case Windows Server, on top of which GSX ran. In turn, GSX was host to a handful of VMs. Maintenance and performance were seriously constrained.
The company's existing virtualization investment meant they were already comfortable running VM workloads. Migrating those workloads off the GSX systems meant creating two new VI3 clusters, Production and Non-Production, with workloads distributed appropriately. A Virtual Desktop cluster was also provisioned as a pilot. Not VDI, mind you, just a bunch of Windows client VMs.
Once the migration had completed and GSX was no more, the company became quite comfortable with the virtualization infrastructure. There were small tweaks here and there of an operational nature, but no design improvements were made for several years.
In order to maintain some semblance of currency for the VMware components, it was decided to upgrade the hosts to the latest version of ESX. ESXi had been out for a while, but we decided to avoid it for now to minimize effort. So ESX 4 was chosen for the hosts, along with the latest version of vCenter for management, vCenter 5. It worked surprisingly well.
The design hadn't varied much since it was introduced to support VI3, the exception being a few clusters spun up to support a workload the business deemed too critical to risk sharing with others. Three guesses as to what type of workload that was.
Another year or two later, the company was ready to take on a major infrastructure refresh project, one I had been encouraging for a long time. As part of this project, it was decided to rebuild the virtualization infrastructure and position it so that private cloud is a possible future path. Enter VMware vCloud Suite 5.5.
The design still incorporates the Production, Non-Production and Virtual Desktop clusters; however, there are some new considerations. Modern requirements have resulted in a Management cluster and a Sandbox (i.e. Lab) cluster being added. In addition, we are supplementing vCenter with vCenter Operations Manager, vCenter Orchestrator, vCloud Automation Center, vCenter Configuration Manager and vCenter Infrastructure Navigator.
In addition to revamping the existing datacenter, a new datacenter has been provisioned in another city for high availability and disaster recovery. To support this, we've deployed two vSphere environments, with some services, such as vCenter Operations Manager, shared between them. We've also added vCenter Site Recovery Manager to handle DR. The new site will only host specific production workloads, so non-production virtual infrastructure hasn't been provisioned there; supporting work such as network configuration has been done so that it can be spun up if needed.
So what's next? The new environment is operational and the new tools are just starting to get tuned and adjusted to the environment. Once the migration to the new environment is wrapped up we'll be giving private cloud some serious consideration.