Optimizing the Virtual Environment

By Bob Carlson, EVP, Criterion HPS

New Tools that Help Industry Dance to Your Enterprise Tune

According to a recent industry survey, more than 46% of CIOs reported that 51% to 85% of their data-center servers have been virtualized. Gartner predicts that by the end of 2012, about 50% of x86-architecture workloads (representing approximately 58 million deployed machines) will be running in virtual machines.¹ IDC analyst Cindy Borovick believes that 2010 was the first year in which deployed virtual servers outnumbered deployed physical servers.²

Clearly, the virtualization megatrend is delivering real, tangible cost savings and operational improvements. Yet CIOs are left with a number of questions: How do the major technology trends affect my virtualization strategy? How do I align my conversations with industry to my objectives? Which technical solution delivers the lowest possible cost of virtualized operations for my applications?

In the early stages of the server-virtualization journey, there were significant technology differences between the various hypervisor solutions, ranging from raw performance to the available administrative and management tools. Over time, the marketplace has eliminated these material differences. As a result, CIOs can focus on other technology advances that will significantly impact current architectures and reduce costs. Three major trends, along with a new class of solutions built on them, are fundamentally reshaping the virtualization discussion:

  • Emerging industry standards delivering hypervisor interoperability
  • Dense computing delivered by the new Intel® processor families
  • The quiet migration to 64-bit architectures
  • The emergence of new tools that help enterprises identify the best solution for their environments

Hypervisor Interoperability
In early 2009, the Open Virtualization Format (OVF) specification was published by the Distributed Management Task Force (DMTF), based on work by IBM, VMware, Microsoft, Citrix, HP, Red Hat, and other industry leaders.³ The specification defines a standards-based, portable format so that enterprises can deploy a virtual machine (VM) on any hypervisor that supports OVF. It thereby creates a platform-independent, efficient, extensible, and open packaging and distribution format for VMs.
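Because an OVF descriptor is plain XML, even a few lines of scripting can inspect a package without touching any particular hypervisor. The following is a minimal Python sketch, not a vendor tool; it assumes the OVF 1.x envelope namespace and a hypothetical descriptor file named appliance.ovf, and simply lists the virtual systems the package declares:

    # Minimal sketch: enumerate the VirtualSystem entries in an OVF descriptor.
    import xml.etree.ElementTree as ET

    OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"   # assumed OVF 1.x namespace

    def list_virtual_systems(descriptor_path):
        """Return the ovf:id of each VirtualSystem element in the descriptor."""
        root = ET.parse(descriptor_path).getroot()         # the <Envelope> element
        return [vs.get(OVF_NS + "id", "<unnamed>")
                for vs in root.findall(OVF_NS + "VirtualSystem")]

    if __name__ == "__main__":
        for vm_id in list_virtual_systems("appliance.ovf"):   # hypothetical file name
            print("VirtualSystem:", vm_id)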

Although the leading hypervisor vendors have not yet implemented the full scope of this specification, I believe VM packaging will become independent of the hypervisor in the near future. CIOs will then be able to move application workloads frictionlessly to the lowest-cost processing environment.

Dense Computing
With the emergence of the new Intel® microarchitecture (codenamed Nehalem), the industry has begun to build and deliver very dense servers. Since early 2008, the number of vCores per motherboard has increased from four to 64 and more. This dramatic increase in processing density has coincided with a marked reduction in power requirements, which allows these very dense servers to be air-cooled. This technology has the potential to achieve considerable infrastructure consolidation and cost reductions.
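To make the consolidation effect concrete, consider a back-of-the-envelope sketch with invented figures: 320 early-2008 hosts with four cores each, replaced by 64-core hosts. The arithmetic below deliberately ignores per-core performance gains, so it understates the real benefit:

    import math

    # Hypothetical fleet: early-2008 hosts with 4 cores each, replaced by 64-core hosts.
    legacy_hosts = 320
    legacy_cores_per_host = 4
    dense_cores_per_host = 64

    total_cores = legacy_hosts * legacy_cores_per_host
    dense_hosts_needed = math.ceil(total_cores / dense_cores_per_host)

    print(f"{legacy_hosts} legacy hosts -> {dense_hosts_needed} dense hosts "
          f"({legacy_hosts / dense_hosts_needed:.0f}:1 physical consolidation)")

With these assumed numbers, 320 four-core hosts collapse to 20 dense hosts, a 16:1 physical consolidation, before the newer cores' higher per-core performance is even counted.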

64-bit Architectures
The adoption of and migration to 64-bit architectures has been one of the most significant, yet least discussed, migrations in the industry. It is being driven by the tremendous success of Windows 7 and the availability of multicore processors. If you spend any time searching the Internet for the benefits of 64-bit architectures, the preponderance of commentary focuses on the ability to address and use more memory. While this is indeed an important benefit, I don’t believe it is the most important advantage for CIOs. Arguably, the most critical reason to move to 64-bit architectures is to ensure that the application portfolio takes full advantage of the processing and cost benefits of the new, dense-server solutions. The result of this server density is significantly reduced costs through server-rack consolidation. More importantly, the physical architecture’s complexity is tremendously reduced. Eliminating this complexity has a multiplying effect on cost reductions, as every component of the system life cycle becomes easier to manage.
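As a quick illustration of why the migration matters, a 32-bit process can address at most 4 GB of memory, no matter how much a dense server offers; a process's pointer width shows which world it lives in. The check below is illustrative only and uses nothing beyond the Python standard library:

    import platform
    import struct

    # A C pointer occupies 8 bytes in a 64-bit process and 4 bytes in a 32-bit process.
    pointer_bits = struct.calcsize("P") * 8

    print(f"Process word size: {pointer_bits}-bit")
    print(f"Machine architecture: {platform.machine()}")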

Individualized Toolsets
With all of the technology breakthroughs and new capabilities being delivered by industry, CIOs need new tools and methods to regain control of their decision-making processes. As with any new technology, the specific benefits “depend” on each individual environment. New, individualized toolsets are emerging to guide CIOs in their conversations with industry. These toolsets also complement the generic solutions resulting from hypervisor interoperability, dense computing, and 64-bit architectures.

One example is VCO (pronounced V-COE), which helps the CIO communicate his or her requirements to industry and then evaluate and compare industry’s recommendations in a straightforward, meaningful manner. This management tool allows alternative solutions to be evaluated so that the CIO can select the solution or solutions that provide the lowest possible total cost of ownership; a simplified sketch of the underlying calculation follows the list below. It creates the following:

  • A new framework for comparing the cost of ownership for different virtualization stacks
  • A metric that represents the total cost to operate a single VM for the specific application
  • Separate baselines for each application, ensuring that unique operational requirements are included
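As a simplified sketch of the cost-per-VM idea, rather than VCO's actual formulas, consider a handful of hypothetical annual cost inputs (amortized hardware, power and cooling, licensing, and administration) for the infrastructure hosting one application:

    from dataclasses import dataclass

    @dataclass
    class StackCosts:
        """Hypothetical annual costs for one virtualization stack (figures are invented)."""
        name: str
        hardware: float        # amortized server, storage, and network hardware
        power_cooling: float   # facility power and cooling
        licensing: float       # hypervisor, OS, and management licensing
        administration: float  # labor to run and operate the stack
        vm_capacity: int       # VMs of the target application the stack can host

    def cost_per_vm(stack: StackCosts) -> float:
        """Total annual cost divided by the number of VMs the stack supports."""
        total = (stack.hardware + stack.power_cooling
                 + stack.licensing + stack.administration)
        return total / stack.vm_capacity

    # A baseline for one specific application on the current infrastructure.
    baseline = StackCosts("current stack", hardware=120_000, power_cooling=40_000,
                          licensing=60_000, administration=80_000, vm_capacity=200)
    print(f"Cost-per-VM baseline: ${cost_per_vm(baseline):,.2f} per year")

Because the inputs are captured per application, each application gets its own baseline, which is how unique operational requirements stay in the comparison.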

The VCO is based on the insight that the VM file contains all of the business and operational value. Yet the infrastructure required to run and operate the VM file has become a pure commodity. Once the VM is created, the historical dependence between the hardware, operating system, and application has been permanently severed. With the emergence of the OVF, VMs can simply be deployed to any available virtualized infrastructure. In essence, virtualization turns the underlying infrastructure into a pure commodity. It’s therefore reasonable for the CIO to demand that this infrastructure deliver the lowest possible total cost of ownership to the organization.

This model establishes the “cost per VM baseline,” which is used to compare and contrast new technologies that can deliver superior, ongoing cost savings. The framework is open and extensible to meet future requirements. To evaluate alternative VM infrastructures, the tool begins by determining the VCO for the target application (see the Figure). In a modern data-center operation, the inputs it requires should all be readily available. Once the application’s VCO is determined, the CIO can provide it to industry and benefit from open competition.
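Continuing the hypothetical sketch, once that baseline figure exists, vendor proposals collapse to a single comparable number. The quotes below are invented for illustration:

    # Hypothetical cost-per-VM figures, in dollars per VM per year.
    baseline_cost_per_vm = 1_500.00      # from the baseline sketch above

    proposals = {
        "Vendor A (denser hosts)": 1_150.00,
        "Vendor B (cheaper licensing)": 1_320.00,
        "Vendor C (status-quo refresh)": 1_480.00,
    }

    for name, cost in sorted(proposals.items(), key=lambda item: item[1]):
        saving = (baseline_cost_per_vm - cost) / baseline_cost_per_vm
        print(f"{name}: ${cost:,.2f} per VM ({saving:.0%} below baseline)")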

Now, the virtualization conversation between the CIO and industry can be aligned with the CIO’s objectives. The newest and greatest innovations can be assessed in the context of how they reduce the total cost of operation for an application stack.

By adopting these types of tools, the CIO will reward marketplace innovations that continually reduce the cost of running and operating a VM. For example, shifting the emphasis to total system optimization and away from any individual solution component will result in the lowest total cost of operation for the application.

The VCO drives the optimization of these tradeoffs, resulting in the lowest operational cost per VM for the organization. Understanding these tradeoffs and the impact on the cost of running and operating a single VM provides the CIO with new, critical insights into the operation. It also puts him or her back in control of understanding how industry innovation can reduce ongoing operational costs.

Together, hypervisor interoperability, dense computing, and the migration to 64-bit architectures have defined the server-virtualization journey. They have also allowed CIOs to focus on how these advances are impacting current architectures and reducing costs. The resulting solutions have propelled virtualization to the forefront of the industry and made the case for its distinct benefits. As CIOs wade through the solutions advanced by these trends, however, the search continues for tools that help identify the best virtualization strategy for their unique enterprise environments and that push industry to respond to their specific requirements.

1 Bernd Harzog, “Gartner Projects Server Virtualization to Grow from 16% to 50% of Workloads by 2012,” The Virtualization Practice, October 21, 2009. (http://www.virtualizationpractice.com/blog/?p=2496)
2 Sean Michael Kerner, “Virtual Servers Top Physical Ones, WAN Optimization Soars: IDC,” Datamation, April 28, 2010. (http://itmanagement.earthweb.com/datbus/article.php/3879246/Virtual-Servers-Top-Physical-Ones-WAN-Optimization-Soars-IDC.htm)
3 Distributed Management Task Force, Inc. (DMTF), “Open Virtualization Format Specification,” Document Number DSP0243, February 22, 2009.

Mr. Carlson’s 26-year career has paralleled the growth of the information technology industry from back-office automation to strategic business-process enabler. Both as an IBM executive and as the CEO of two start-up companies, Mr. Carlson has specialized in developing and implementing leading-edge business solutions that provide competitive advantage through the systematic exploitation of technology. He is now focused on the rapidly growing optimized-solution server market, leading business development and product initiatives for Criterion HPS.