Hypervisors, virtualization, and the cloud
Virtualization improves IT resource utilization by treating your company's physical resources as pools from which virtual resources can be dynamically allocated.
Virtualization involves a shift in thinking from physical to logical, treating IT resources as logical resources rather than separate physical resources. With virtualization, you can consolidate resources such as processors, storage, and networks into a virtual environment that provides the following benefits:
- Consolidation to reduce hardware cost.
- Optimization of workloads.
- IT flexibility and responsiveness.
Virtualization is the creation of flexible substitutes for actual resources — substitutes that have the same functions and external interfaces as their actual counterparts but that differ in attributes such as size, performance, and cost. These substitutes are called virtual resources; their users are typically unaware of the substitution.
Virtualization is commonly applied to physical hardware resources by combining multiple physical resources into shared pools from which users receive virtual resources. With virtualization, you can make one physical resource look like multiple virtual resources.
Furthermore, virtual resources can have functions or features that are not available in their underlying physical resources.
System virtualization creates many virtual systems within a single physical system. Virtual systems are independent operating environments that use virtual resources. Virtual systems running on IBM systems are often referred to as logical partitions or virtual machines. System virtualization is most commonly implemented with hypervisor technology.
Hypervisors are software or firmware components that can virtualize system resources. The following figure shows how virtualization shifts thinking from physical to logical domains.
Now let's look at the types of hypervisors.
Types of hypervisors
There are two types of hypervisors:
- Type 1 hypervisor
- Type 2 hypervisor
Type 1 hypervisors run directly on the system hardware. Type 2 hypervisors run on a host operating system that provides virtualization services, such as I/O device support and memory management. The following figure shows how type 1 and type 2 hypervisors differ.
The hypervisors described in this series are supported by various hardware platforms and in various cloud environments:
- PowerVM: A feature of IBM POWER5, POWER6, and POWER7 servers, PowerVM supports IBM i, AIX, and Linux.
- VMware ESX Server: A "bare metal" embedded hypervisor, VMware ESX runs directly on server hardware without requiring an additional underlying operating system.
- Xen: A virtual-machine monitor for IA-32, x86-64, Itanium, and ARM architectures, Xen allows several guest operating systems to execute concurrently on the same computer hardware. Xen systems are structured with the Xen hypervisor as the lowest and most privileged layer.
- KVM: A virtualization infrastructure for the Linux kernel, KVM supports native virtualization on processors with hardware virtualization extensions. Originally it supported x86 processors, but it now supports a wide variety of processors and guest operating systems, including many variations of Linux, BSD, Solaris, Windows, Haiku, ReactOS, and the AROS Research Operating System (there's even a modified version of qemu that can use KVM to run Mac OS X).
- z/VM: The current version of IBM's virtual machine operating system, z/VM runs on IBM's zSeries and can be used to support large numbers (thousands) of Linux virtual machines.
All of these hypervisors are supported by IBM hardware.
The sections that follow describe in detail the features and functionality of each hypervisor and the methods for deploying and managing virtual systems with it.
Choosing the right hypervisor
One of the best ways to determine which hypervisor meets your needs is to compare their performance metrics. These include CPU overhead, amount of maximum host and guest memory, and support for virtual processors.
But metrics alone should not determine your choice. In addition to the capabilities of the hypervisor, you must also verify the guest operating systems that each hypervisor supports.
If you are running heterogeneous systems in your service network, then you must select the hypervisor that has support for the operating systems you currently run. If you run a homogeneous network based on Windows or Linux, then support for a smaller number of guest operating systems might fit your needs.
Not all hypervisors are made equal, but they all offer similar features. Understanding the features each hypervisor offers, as well as the guest operating systems it supports, is an essential part of any hardware virtualization hypervisor selection process. Matching this data to your organization's requirements will be at the core of the decision you make.
The following factors should be examined before choosing a suitable hypervisor:
Virtual machine performance. Virtual systems should meet or exceed the performance of their physical counterparts, at least in relation to the applications within each server. Everything beyond meeting this benchmark is profit. Ideally, you want each hypervisor to optimize resources on the fly to maximize performance for each virtual machine. The question is how much you might be willing to pay for this optimization. The size or mission-criticality of your project generally determines the value of this optimization.
Memory management. Look for support for hardware-assisted memory virtualization. Memory overcommit and large page table support in the VM guest and hypervisor are preferred features; memory page sharing is an optional bonus feature you might want to consider.
High availability. Each major vendor has its own high availability solution and the way each achieves it may be wildly different, ranging from very complex to minimalist approaches. Understanding both the disaster prevention and disaster recovery methods for each system is critical. You should never bring any virtual machine online without fully knowing the protection and recovery mechanisms in place.
Live migration. Live migration is extremely important for most users. Consider whether each hypervisor supports live migration across different platforms and whether it can live migrate two or more VMs simultaneously, and weigh carefully what each hypervisor offers in this area.
Networking, storage, and security. In networking, hypervisors should support network interface card (NIC) teaming and load balancing, unicast isolation, and standard 802.1Q virtual local area network (VLAN) trunking. Each hypervisor should also support iSCSI- and Fibre Channel-networked storage and enterprise data protection software, with preference given to tools and APIs, Fibre Channel over Ethernet (FCoE), and virtual disk multi-hypervisor compatibility.
Management features. Look for such management features as Simple Network Management Protocol (SNMP) trap capabilities, integration with other management software, and fault tolerance of the management server — these features are invaluable to a hypervisor.
Now I don't want to influence your choice of hypervisor (after all, your needs and requirements are unique), but here are a few general suggestions from my experience with implementation of hypervisors for cloud-based workloads:
- For UNIX-based workloads and business-critical, transaction-heavy applications where performance is the paramount requirement, the PowerVM hypervisor is well suited to handle that sort of load.
- If you're running business-critical applications on System X (x86 servers for Windows and Linux), VMware ESX works quite well.
- If your applications aren't particularly business critical, you might try KVM or Xen; both are open source, so the startup costs are relatively low as well.
PowerVM hypervisor
PowerVM is virtualization without limits. Businesses are turning to PowerVM virtualization to consolidate multiple workloads onto fewer systems, increasing server utilization and reducing cost. PowerVM provides a secure and scalable virtualization environment for AIX, IBM i, and Linux applications, built upon the advanced RAS features and leading performance of the Power Systems platform.
Operating system versions supported:
- AIX 5.3, AIX 6.1 and AIX 7
- IBM i 6.1 and IBM i 7.1
- Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 6 (when announced by Red Hat)
- SUSE Linux Enterprise Server 10 and SUSE Linux Enterprise Server 11
Hardware platforms supported:
- IBM Power Systems with POWER5, POWER6, and POWER7 processors
The following figure shows the architecture of the PowerVM hypervisor:
Features
PowerVM Enterprise has two new industry-leading capabilities called Active Memory Sharing and Live Partition Mobility:
- Active Memory Sharing intelligently flows system memory from one partition to another as workload demands change.
- Live Partition Mobility allows for the movement of a running partition from one server to another with no application downtime, resulting in better system utilization, improved application availability, and energy savings. With Live Partition Mobility, planned application downtime due to regular server maintenance can be a thing of the past.
Following are other features of PowerVM.
Micro-partitioning support: Micro-partitioning technology helps lower costs by allowing the system to be finely tuned to consolidate multiple independent workloads. Micro-partitions can be defined as small as 1/10th of a processor and be changed in increments as small as 1/100th of a processor. Up to 10 micro-partitions can be created per core.
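As a quick worked example of that granularity (our own illustration with hypothetical values; this is not an HMC or IVM interface):

```python
# Worked example of micro-partition granularity (illustrative only).
CORES = 8                       # physical cores in a hypothetical Power server
MAX_PER_CORE = 10               # up to 10 micro-partitions per core
MIN_ENTITLEMENT = 0.10          # smallest partition: 1/10th of a processor
STEP = 0.01                     # entitlements change in 1/100th increments

max_partitions = CORES * MAX_PER_CORE
entitlements = [0.10, 0.35, 1.50]   # sample processing-unit assignments

valid = all(e >= MIN_ENTITLEMENT and round(e / STEP, 6).is_integer()
            for e in entitlements)
print(f"Up to {max_partitions} micro-partitions on {CORES} cores; "
      f"sample entitlements valid: {valid}, total used: {sum(entitlements):.2f} PU")
```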
Integrated Virtualization Manager: The Integrated Virtualization Manager (IVM) allows you to point, click, and consolidate workloads with an easy-to-use browser-based interface.
Virtual I/O Server: Lets you share I/O resources. The Virtual I/O Server is a special-purpose partition that provides virtual I/O resources to client partitions. The Virtual I/O Server owns the resources that are shared with clients: a physical adapter assigned to the Virtual I/O Server partition can be shared by one or more other partitions. The Virtual I/O Server eliminates the need for dedicated network adapters, disk adapters, and disk drives in each client partition.
PowerVM Lx86 support: You can run x86 Linux applications on POWER; this feature enables the dynamic execution of x86 Linux instructions by mapping them to instructions on a POWER-based system and caching the mapped instructions to optimize performance.
Shared dedicated capacity: Receive the benefits of dedicated resources without the waste. This feature allows the "donation" of spare CPU cycles for dedicated processor partitions to be used by the shared pool, thus increasing overall system performance. The dedicated partition maintains absolute priority for dedicated CPU cycles; sharing only occurs when the dedicated partition has not consumed all its resources. This feature is supported on POWER6 and POWER7 processor-based servers.
Multiple shared processor pools: With this feature, the system almost does the administration for you. You simply assign priorities to partitions and let the hypervisor allocate processing power as needed by your applications. This feature allows for automatic non-disruptive balancing of processing power between partitions assigned to shared pools, resulting in increased throughput and the potential to reduce processor-based software licensing costs.
N-port ID virtualization: NPIV provides direct access to Fibre Channel adapters from multiple client partitions, simplifying the management of Fibre Channel SAN environments. NPIV support is included with PowerVM Express, Standard, and Enterprise Edition and supports AIX V5.3, AIX V6.1, IBM i 6.1.1, and SUSE Linux Enterprise Server 11 partitions on all POWER6 and POWER7 processor-based servers, including blades.
Virtual tape: PowerVM has two virtualization methods for using tape devices on POWER6 and POWER7 processor-based servers, simplifying backup and restore operations. Both methods are supported with PowerVM Express, Standard, or Enterprise Edition:
- NPIV enables PowerVM LPARs to access SAN tape libraries using shared physical HBA resources for AIX V5.3, AIX V6.1, and SUSE Linux Enterprise Server 11 partitions on POWER6 and POWER7 processor-based servers.
- Virtual tape support allows serial sharing of selected SAS tape devices for AIX V5.3, AIX V6.1, IBM i 6.1, and SUSE Linux Enterprise Server 11 partitions on POWER6 and POWER7 processor-based servers.
Live Partition Mobility: Move a running AIX or Linux partition from one physical Power Systems server to another without application downtime, helping clients to avoid application interruption for planned system maintenance, provisioning, and workload management. This feature is supported on POWER6 and POWER7 processor-based servers. It is also possible to move partitions from a POWER6 processor-based server to a POWER7 processor-based server to simplify upgrades to the newer platform.
PowerVM Live Partition Mobility is now supported in environments with two Hardware Management Consoles (HMCs), supporting larger and more flexible configurations. PowerVM partitions support both physical and virtual I/O, enabling dynamic heterogeneous multipath I/O. With this support, partitions can have paths to a storage device that include both physical (such as dedicated Fibre Channel adapters) and virtual (such as NPIV) adapters. Multipath I/O is supported in Live Partition Mobility environments with AIX V5.3 and AIX V6.1 partitions on POWER6 and POWER7 processor-based servers.
Active memory sharing: Allowing for the more efficient utilization of system memory, the advanced memory sharing capability of PowerVM dynamically reallocates memory to running virtual partitions based on changing workload demands.
Deployment: Deploying your virtualization configuration includes the following tasks:
- Installing the Virtual I/O Server.
- Creating logical partitions and assigning virtual or physical resources to them.
- Installing operating systems in the logical partitions.
- Deploying Capacity on Demand.
The tools available to deploy the virtualization configuration are as follows:
- Hardware Management Console (HMC): Import a system plan (created using the System Planning Tool, SPT) to the HMC, and the HMC can deploy that plan to the managed system. The HMC creates logical partitions based on the logical partition configuration specified in the system plan.
- Virtual I/O Server: The Virtual I/O Server is software that runs in its own logical partition and provides virtual I/O resources to client logical partitions on the managed system. The Virtual I/O Server lets one or more client logical partitions share physical adapters with attached disks or optical devices.
- Integrated Virtualization Manager: The Integrated Virtualization Manager is the user interface to the management partition (the Virtual I/O Server) on managed systems that are not managed by an HMC. You can use the Integrated Virtualization Manager to create AIX and Linux client logical partitions on a single managed system. You can also configure virtual storage and virtual Ethernet on the managed system.
Deploying virtualization with the Hardware Management Console
You can create logical partitions, install operating systems, and deploy Capacity on Demand to a system that is managed by a Hardware Management Console (HMC).
To deploy virtualization configuration using the HMC, complete the following tasks:
- Optional: Enter the activation code for Virtualization Engine technologies.
- Optional: Create the Virtual I/O Server logical partition.
- Optional: Install the Virtual I/O Server.
- Create AIX and Linux logical partitions and assign resources to them.
- Install AIX and Linux in the logical partitions.
Deploying virtualization with the Integrated Virtualization Manager
You can create logical partitions and install operating systems on a system that is managed by the Integrated Virtualization Manager. To deploy virtualization configuration using the IVM, complete the following tasks:
- Enter the activation code for the Virtual I/O Server.
- Install the Virtual I/O Server.
- Prepare the Virtual I/O Server management partition.
- Create AIX and Linux logical partitions and assign resources to them.
- Install AIX and Linux in the logical partitions.
Managing your virtual machines
PowerVM manages virtual machines with the Integrated Virtualization Manager (IVM), which helps you:
- Simplify IT management by enabling computer resources to look and perform as one.
- Increase flexibility, allowing your organization to meet both anticipated and unanticipated spikes in server demand with shared capacity.
The IVM does not require the use of an HMC for managing LPARs on a single system. With IVM, clients can partition a single system by creating LPARs and provide for management of virtual storage and virtual Ethernet.
Choosing PowerVM
Consider the following pros and cons before deciding to use PowerVM as your virtualization tool.
On the pro side:
- PowerVM supports multiple operating environments on a single system.
- Enables up to 10 VMs per processor core.
- Processor, memory, and I/O resources can be dynamically moved between VMs.
- VMs can use dedicated or shared (capped or uncapped) processor resources.
- Processor resources can automatically move between VMs based on workload demands.
- Processor resources for a group of VMs can be capped, reducing software license costs.
- Storage resources for Power Systems servers and VIOS can be centralized in pools to optimize resource utilization.
- Simplifies VM creation and management for entry Power Systems servers and blades.
- Supports running many x86 Linux applications in a Linux partition on PowerVM.
- Live AIX and Linux VMs can be moved between servers, eliminating planned downtime.
- Intelligently flows memory from one VM to another for increased memory utilization.
- Simplifies the management and improves performance of Fibre Channel SAN environments.
On the con side:
- During high-demand periods, performance can suffer. Although PowerVM's Linux virtualization implementations have mechanics that allow very granular resource management and control, during peak periods there is still the potential for performance degradation.
- With IBM PowerVM, you can virtualize 10 logical partitions (LPARs) to share one CPU or even one NIC; this practice can have a negative impact on performance (too much activity on too little hardware) and availability (consider the consequences of that one CPU failing). The flexibility and configurability of virtualization can lead to poorly designed systems that cause companies to abandon their entire virtualization strategy.
- Security: In the past, if a server was compromised, the vulnerability could be contained to that one server. With virtualization, every logical partition or virtual environment within the physical server has the potential to be compromised. While a systems administrator can make sure that the logical partitions within the physical box don't have access to one another, you should not overlook physical security either.
- For example, while not required in many cases, most IBM System p shops use a dedicated Hardware Management Console (HMC) to perform their Linux logical partitioning and virtualization configuration. If the administrator walks away and leaves the console open, an intruder can gain access to every logical environment in the physical server.
VMware ESX Server hypervisor
ESX Server is a type 1 hypervisor that creates logical pools of system resources so that many virtual machines can share the same physical resources.
ESX Server is an operating system that functions like a hypervisor and runs directly on the system hardware. ESX Server inserts a virtualization layer between the system hardware and the virtual machines, turning the system hardware into a pool of logical computing resources that ESX Server can dynamically allocate to any operating system or application. The guest operating systems running in virtual machines interact with the virtual resources as if they were physical resources.
The following figure shows a system with ESX Server running virtual machines. ESX Server is running one virtual machine with the service console and three additional virtual machines. Each additional virtual machine is running an operating system and applications independent of the other virtual machines, while sharing the same physical resources.
Features
The key components of the ESX Server architecture are:
- ESX Server virtualization layer: Separates the underlying physical resources from the virtual machines.
- Resource manager: Creates virtual machines and delivers processing units, memory, network bandwidth, and disk bandwidth to them. It efficiently maps the physical resources to the virtual resources.
- Service console: Controls the installation, configuration, administration, troubleshooting, and maintenance of the ESX Server. The service console resides in its own virtual machine. ESX Server automatically configures the service console virtual machine when you install ESX Server. The service console also provides a place to install systems software such as Tivoli products and IBM Director.
- Hardware interface components, including device drivers: Deliver hardware-specific services while hiding hardware differences from other parts of the system.
ESX Server invokes the following advanced resource management controls to help you guarantee service levels:
- ESX Server uses a proportional share mechanism to allocate processors, memory, and disk resources when multiple virtual machines are contending for the same resource.
- ESX Server can allot processing capacity on a time-share basis to prevent any one virtual machine from monopolizing processor resources.
- ESX Server assigns memory based on virtual machine workloads and defined minimums. For example, if there is insufficient memory in a virtual machine, ESX Server can temporarily borrow memory from one virtual machine, lend it to another virtual machine, and restore it to the original virtual machine when needed.
- ESX Server controls network bandwidth with network traffic shaping. Network sharing is determined by token allocation or consumption based on the average or maximum bandwidth requirements for a virtual machine.
When coupled with VMware VirtualCenter, ESX Server provides the following additional capabilities:
- VMware VMotion: Migrates running virtual machines from one physical server to another with no impact to end users.
- VMware Distributed Resource Scheduler (DRS): Automatically allocates and balances computing resources within a resource pool based on defined business goals.
- VMware HA: Continuously monitors all physical servers in a resource pool and automatically restarts virtual machines affected by server failure on a different physical server within the same resource pool.
ESX Server 3.0 supports the following configurations:
- Host systems with up to 128 virtual machines, 64GB RAM, and up to 32 logical processors.
- Virtual machines located on network file systems and iSCSI adapters.
- Virtual machines with four virtual processors.
ESX Server supports Linux, Windows, FreeBSD (ESX Server 2.5 only), NetWare, and Solaris (ESX Server 3.0 only) guest operating systems.
Deploying virtualization
To deploy virtualization:
- Install ESX Server on the system.
- Create and configure virtual machines. IBM Tivoli Provisioning Manager can be used for this activity.
- Install a guest operating system in each virtual machine.
Managing your virtual machines
The VMware vSphere client is used to manage virtual machines. With the vSphere client, you can open a console to the desktop of managed virtual machines. From the console, you can change operating system settings, use applications, browse the file system, monitor system performance, and so on, as if you were operating a physical system.
You can also use snapshots to capture the entire state of the virtual machine at the time you take the snapshot.
Connect the vSphere client directly to an ESX/ESXi host to work with only the virtual machines and the physical resources available on that host. Connect your vSphere client to a vCenter Server to manage virtual machines and pooled physical resources across multiple hosts.
Multiple vCenter Server systems can be joined together in a vCenter Server Connected Group to allow them to be managed with a single vSphere Client connection.
The following activities can be managed with the vSphere Client VM manager:
- Edit virtual machine startup and shutdown settings.
- Open a console to a virtual machine.
- Add and remove VMs.
- Use snapshots to manage VMs.
- Manage existing snapshots.
- Restore snapshots.
- Convert virtual disks from thin to thick.
- View existing hardware configuration and access the Add Hardware wizard to add or remove hardware.
- View and configure a number of virtual machine properties, such as power management interaction between the guest operating system and virtual machine, and VMware Tools settings.
- Configure CPUs, CPU hyperthreading resources, memory, and disks.
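The activities above are performed through the vSphere Client GUI, but the same operations can also be scripted against the vSphere API. The following is a minimal sketch using the pyVmomi Python SDK, which is our assumption here (the article itself describes only the graphical client); the host, credentials, and VM name are hypothetical. It lists the VMs visible to a vCenter Server or ESX/ESXi host and takes a snapshot of one of them:

```python
# Minimal sketch: list managed VMs and snapshot one of them through the vSphere API.
# Assumes the pyVmomi SDK; host, credentials, and VM name below are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()        # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:                          # every VM visible to this connection
        print(vm.name, vm.runtime.powerState)

    # Capture the entire state of one VM, as described above
    target = next(vm for vm in view.view if vm.name == "web-01")
    target.CreateSnapshot_Task(name="pre-maintenance",
                               description="scripted snapshot",
                               memory=False, quiesce=True)
finally:
    Disconnect(si)
```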
Choosing VMware ESX Server
Consider the following pros and cons before you decide to use VMware ESX Server as your virtualization tool.
On the pro side:
- VMware ESX/ESXi 4.0 offers a minimal system with a compact disk footprint of only 70MB.
- Infrastructure scalability supports 255GB RAM for virtual machines and up to 1TB RAM for large-scale server consolidation and disaster recovery projects; each VMware ESX/ESXi host supports up to 256 powered-on virtual machines.
- The storage system adds and extends virtual disks non-disruptively to a running virtual machine to increase available resources. The vSphere client storage management provides customizable reports and topology maps.
- For high availability and disaster recovery, VMware ESX provides vStorage APIs for data protection, a backup proxy server that removes the load from VMware ESX/ESXi installations, and file-level full and incremental backups.
- VMware's high availability and fault tolerance features provide zero downtime, zero data loss, and continuous availability against physical server failures with VMware Fault Tolerance.
- VMware's vCenter Server provides a central point of control for virtualization management. It is a scalable and extensible management server for administering infrastructure and application services, with deep visibility into every aspect of the virtual infrastructure. vCenter Server supports event-based alarms and performance graphs, and a single vCenter Server can manage up to 300 hosts and 3,000 virtual machines. With vCenter Server Linked Mode, you can manage up to 10,000 virtual machines from a single console.
On the con side:
- VMware requires more patches and updates than Xen or KVM, for instance.
- vSphere offers only file-level backup and recovery, no application-level awareness.
- VMware vCenter requires a third-party database to store and manage host system configuration information.
- The VMware Distributed Resource Scheduler (DRS) feature could be more inclusive; it is based solely on CPU and memory utilization.
- There are some security holes in VMware (for example, memory ballooning issues).
Xen hypervisor
Xen is a type 1 hypervisor that creates logical pools of system resources so that many virtual machines can share the same physical resources.
Xen is a hypervisor that runs directly on the system hardware. Xen inserts a virtualization layer between the system hardware and the virtual machines, turning the system hardware into a pool of logical computing resources that Xen can dynamically allocate to any guest operating system. The operating systems running in virtual machines interact with the virtual resources as if they were physical resources.
The following figure shows a system with Xen running virtual machines.
Xen is running three virtual machines. Each virtual machine is running a guest operating system and applications independent of other virtual machines while sharing the same physical resources.
Features
The following are key concepts of the Xen architecture:
- Full virtualization.
- Xen can run multiple guest operating systems, each in its own VM.
- Instead of a driver, lots of great stuff happens in the Xen daemon, xend.
Full virtualization
Most hypervisors are based on full virtualization which means that they completely emulate all hardware devices to the virtual machines. Guest operating systems do not require any modification and behave as if they each have exclusive access to the entire system.
Full virtualization often includes performance drawbacks because complete emulation usually demands more processing resources (and more overhead) from the hypervisor. Xen is based on paravirtualization; it requires that the guest operating systems be modified to support the Xen operating environment. However, the user space applications and libraries do not require modification.
Operating system modifications are necessary for the following reasons:
- So that Xen can replace the operating system as the most privileged software.
- So that Xen can use more efficient interfaces (such as virtual block devices and virtual network interfaces) to emulate devices — this increases performance.
Xen can run multiple guest OS, each in its own VM
Xen can run several guest operating systems each running in its own virtual machine or domain. When Xen is first installed, it automatically creates the first domain, Domain 0 (or dom0).
Domain 0 is the management domain and is responsible for managing the system. It performs tasks like building additional domains (or virtual machines), managing the virtual devices for each virtual machine, suspending virtual machines, resuming virtual machines, and migrating virtual machines. Domain 0 runs a guest operating system and is responsible for the hardware devices.
Instead of a driver, lots of great stuff happens in the Xen daemon
The Xen daemon, xend, is a Python program that runs in dom0. It is the central point of control for managing virtual resources across all the virtual machines running on the Xen hypervisor. Most of the command parsing, validation, and sequencing happens in user space in xend, not in a driver.
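In practice, xend is driven through tools such as xm or libvirt rather than being called directly. As a hedged illustration, the following minimal sketch assumes the libvirt Python bindings on a Xen host and lists the running and defined domains, including Domain-0:

```python
# Minimal sketch: talk to a Xen host through libvirt (which in turn drives xend/xm).
# Assumes the libvirt Python bindings and that the script runs on the Xen host itself.
import libvirt

conn = libvirt.open("xen:///")                    # local Xen hypervisor connection
try:
    for dom_id in conn.listDomainsID():           # running domains, including Domain-0
        dom = conn.lookupByID(dom_id)
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"id={dom_id} name={dom.name()} vcpus={vcpus} mem={mem // 1024} MiB")
    for name in conn.listDefinedDomains():        # defined but not running
        print(f"defined: {name}")
finally:
    conn.close()
```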
IBM supports the SUSE Linux Enterprise Server (SLES) 10 version of Xen, which supports the following configuration:
- Four virtual machines per processor and up to 64 virtual machines per physical system.
- SLES 10 guest operating systems (paravirtualized only).
Deploying virtualization
To deploy virtualization for Xen:
- Install Xen on the system.
- Create and configure virtual machines (this includes the guest operating system).
Install the Xen software using one of the following methods:
- Interactive install: Use this procedure to install directly on a dedicated virtual machine on the Xen server. This dedicated virtual machine is referred to as the client computer in the install procedure.
- Install from CommCell console: Use this procedure to install remotely on a dedicated virtual machine on the Xen server.
See Related topics for more information on deploying virtualization.
Managing your virtual machines
There are several virtual machine managers available including:
- Open source managers: OpenXenManager, an open source clone of Citrix's XenServer XenCenter that manages both XCP and Citrix's XenServer; Xen Cloud Control System (XCCS), a lightweight front-end package for the Xen Cloud Platform cloud computing system; and Zentific, a web-based management interface for the effective control of virtual machines running on the Xen hypervisor.
- Commercial managers: Convirture ConVirt, a centralized management solution that lets you provision, monitor, and manage the complete life cycle of your Xen deployment; Citrix XenCenter, a Windows-native graphical user interface for managing Citrix XenServer and XCP; and Versiera, a web-based Internet technology designed to securely manage and monitor both cloud environments and enterprises, with support for Linux, FreeBSD, OpenBSD, NetBSD, OS X, Windows, Solaris, OpenWRT, and DD-WRT.
Choosing Xen
On the pro side:
- The Xen server is built on the open source Xen hypervisor and uses a combination of paravirtualization and hardware-assisted virtualization. This collaboration between the OS and the virtualization platform enables the development of a simpler hypervisor that delivers highly optimized performance.
- Xen provides sophisticated workload balancing that captures CPU, memory, disk I/O, and network I/O data; it offers two optimization modes: one for performance and another for density.
- The Xen server takes advantage of a unique storage integration feature called Citrix StorageLink. With it, the sysadmin can directly leverage features of arrays from such companies as HP, Dell EqualLogic, NetApp, EMC, and others.
- The Xen server includes multicore processor support, live migration, physical-server-to-virtual-machine conversion (P2V) and virtual-to-virtual conversion (V2V) tools, centralized multiserver management, real-time performance monitoring, and speedy performance for Windows and Linux.
On the con side:
- Xen has a relatively large footprint and relies on Linux in dom0.
- Xen relies on third-party solutions for hardware device drivers, storage, backup and recovery, and fault tolerance.
- Xen gets bogged down with anything with a high I/O rate or anything that sucks up resources and starves other VMs.
- Xen's integration can be problematic; it could become a burden on your Linux kernel over time.
- XenServer 5 is missing 802.1Q virtual local area network (VLAN) trunking; as for security, it doesn't offer directory services integration, role-based access controls, or security logging and auditing of administrative actions.
KVM hypervisor
The Kernel-based Virtual Machine (KVM) is a full native virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). Limited support for paravirtualization is also available for Linux and Windows guests in the form of a paravirtual network driver.
KVM is currently designed to interface with the kernel via a loadable kernel module. Supported guest operating systems include a wide variety, such as Linux, BSD, Solaris, Windows, Haiku, ReactOS, and the AROS Research Operating System. A modified version of qemu is able to use KVM to run Mac OS X.
Note: KVM does not perform any emulation itself; instead, a user-space program uses the /dev/kvm interface to set up a guest virtual server's address space, feed it simulated I/O, and map its video display back onto the host's display.
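A simple way to see this interface from user space is to open /dev/kvm and query the KVM API version, after confirming that the processor exposes the required extensions. The following is a minimal sketch under those assumptions (Linux-only; the constant is the standard ioctl number for KVM_GET_API_VERSION):

```python
# Minimal sketch: check that this host can run KVM (hardware extensions present,
# /dev/kvm created by the kvm module) and query the KVM API version through the
# /dev/kvm ioctl interface mentioned in the note above.
import fcntl
import os

KVM_GET_API_VERSION = 0xAE00   # _IO(KVMIO = 0xAE, 0x00)

def has_hw_extensions():
    with open("/proc/cpuinfo") as f:
        flags = f.read()
    return "vmx" in flags or "svm" in flags      # Intel VT-x or AMD-V

def kvm_api_version():
    fd = os.open("/dev/kvm", os.O_RDWR)
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION, 0)
    finally:
        os.close(fd)

if __name__ == "__main__":
    print("Hardware virtualization extensions:", has_hw_extensions())
    if os.path.exists("/dev/kvm"):
        print("KVM API version:", kvm_api_version())   # 12 on current kernels
    else:
        print("/dev/kvm not present (kvm module not loaded?)")
```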
The following figure shows the KVM architecture.
Paravirtualization is a virtualization technique that presents a software interface to the virtual machines that is similar to but not identical to that of the underlying hardware. The intent of this modified interface is to reduce the portion of the guest operating system's execution time that is spent performing operations which are substantially more difficult to run in a virtual environment compared to a non-virtualized environment. There are specially defined "hooks" that allow the guest and host to request and acknowledge these difficult tasks that would otherwise be executed in the virtual domain, where execution performance is slower.
In the KVM architecture, each virtual machine is implemented as a regular Linux process, scheduled by the standard Linux scheduler. In fact, each virtual CPU appears as a regular Linux process. This allows KVM to benefit from all the features of the Linux kernel.
Device emulation is handled by a modified version of qemu that provides an emulated BIOS, PCI bus, USB bus, and a standard set of devices such as IDE and SCSI disk controllers, network cards, and so on.
Features
The following features are key to KVM.
Security
Since a virtual machine is implemented as a Linux process, it leverages the standard Linux security model to provide isolation and resource controls. The Linux kernel uses SELinux (Security-Enhanced Linux) to add mandatory access controls, multi-level and multi-category security, and to handle policy enforcement. SELinux provides strict resource isolation and confinement for processes running in the Linux kernel.
The sVirt project — a community effort to integrate Mandatory Access Control (MAC) security and Linux-based virtualization (KVM) — builds on SELinux to provide an infrastructure that allows an administrator to define policies for virtual machine isolation. Out of the box, sVirt ensures that a virtual machine's resources cannot be accessed by any other process (or virtual machine); the sysadmin can extend this with fine-grained permissions, for example, to group virtual machines together to share resources.
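As an illustration of this isolation, every qemu/KVM guest process carries its own sVirt (SELinux) context, which can be read from /proc. The following minimal sketch is our own and assumes an SELinux-enabled KVM host:

```python
# Minimal sketch: show the sVirt (SELinux) label of each running qemu/KVM guest
# process, read from /proc/<pid>/attr/current on an SELinux-enabled host.
from pathlib import Path

for proc in Path("/proc").iterdir():
    if not proc.name.isdigit():
        continue
    try:
        comm = (proc / "comm").read_text().strip()
        if "qemu" not in comm:
            continue
        label = (proc / "attr" / "current").read_text().strip("\x00\n")
        print(f"pid={proc.name} comm={comm} context={label}")
    except OSError:
        continue   # process exited or access denied
```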
Memory management
KVM inherits powerful memory management features from Linux. The memory of a virtual machine is stored the same as memory is for any other Linux process and can be swapped, backed by large pages for better performance, shared, or backed by a disk file. NUMA support (Non-Uniform Memory Access, memory design for multiprocessors) allows virtual machines to efficiently access large amounts of memory.
KVM supports the latest memory virtualization features from CPU vendors with support for Intel's Extended Page Table (EPT) and AMD's Rapid Virtualization Indexing (RVI) to deliver reduced CPU utilization and higher throughput.
Memory page sharing is supported through a kernel feature called Kernel Same-page Merging (KSM). KSM scans the memory of each virtual machine and where virtual machines have identical memory pages, KSM merges these into a single page that it shares between the virtual machines, storing only a single copy. If a guest attempts to change this shared page, it will be given its own private copy.
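On a Linux KVM host, you can observe KSM activity through the counters it exposes under /sys/kernel/mm/ksm. The following is a minimal sketch; the 4 KiB page-size assumption is ours:

```python
# Minimal sketch: read the KSM counters a KVM host exposes under /sys/kernel/mm/ksm.
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")

def counter(name):
    path = KSM / name
    return int(path.read_text()) if path.exists() else 0

if __name__ == "__main__":
    running = counter("run")                 # 1 when the ksmd thread is merging pages
    shared = counter("pages_shared")         # deduplicated pages kept in memory
    sharing = counter("pages_sharing")       # guest pages now backed by those shared pages
    saved_mib = max(sharing - shared, 0) * 4096 / 2**20   # assumes 4 KiB pages
    print(f"KSM running: {bool(running)}; approx. memory saved: {saved_mib:.1f} MiB")
```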
Storage
KVM is able to use any storage supported by Linux to store virtual machine images, including local disks with IDE, SCSI and SATA, Network Attached Storage (NAS) including NFS and SAMBA/CIFS, or SAN with support for iSCSI and Fibre Channel. Multipath I/O may be used to improve storage throughput and to provide redundancy.
Again, because KVM is part of the Linux kernel, it can leverage a proven and reliable storage infrastructure with support from all leading storage vendors; its storage stack has a proven record in production deployments.
KVM also supports virtual machine images on shared file systems such as the Global File System (GFS2), allowing virtual machine images to be shared between multiple hosts or shared using logical volumes. Disk images support thin provisioning, which improves storage utilization by allocating storage only when the virtual machine requires it rather than allocating the entire amount up front. The native disk format for KVM is QCOW2, which includes support for multiple levels of snapshots, compression, and encryption.
Live migration
KVM supports live migration, which provides the ability to move a running virtual machine between physical hosts with no interruption to service. Live migration is transparent to the user: the virtual machine remains powered on, network connections remain active, and user applications continue to run while the virtual machine is relocated to a new physical host.
In addition to live migration, KVM supports saving a virtual machine's current state to disk to allow it to be stored and resumed at a later time.
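The following minimal sketch shows both capabilities through the libvirt Python bindings, which is one common way to drive KVM; the host names and domain name are hypothetical:

```python
# Minimal sketch of KVM live migration and save/restore via the libvirt Python bindings.
import libvirt

src = libvirt.open("qemu:///system")                      # source KVM host (local)
dst = libvirt.open("qemu+ssh://dest.example.com/system")  # destination KVM host

dom = src.lookupByName("webserver-01")

# Live migration: the guest stays running while its memory is copied across.
flags = libvirt.VIR_MIGRATE_LIVE
dom.migrate(dst, flags, None, None, 0)

# Alternatively, save the running state to disk and resume it later:
# dom.save("/var/lib/libvirt/save/webserver-01.sav")
# src.restore("/var/lib/libvirt/save/webserver-01.sav")

dst.close()
src.close()
```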
Device drivers
KVM supports hybrid virtualization where paravirtualized drivers are installed in the guest operating system to allow virtual machines to use an optimized I/O interface rather than emulated devices to deliver high performance I/O for network and block devices.
The KVM hypervisor uses the VirtIO standard, developed by IBM and Red Hat in conjunction with the Linux community, for paravirtualized drivers. It is a hypervisor-independent interface for building device drivers, so the same set of device drivers can be used for multiple hypervisors, allowing for better guest interoperability.
VirtIO drivers are included in modern Linux kernels (later than 2.6.25) and in Red Hat Enterprise Linux 4.8+ and 5.3+, and they are available for Red Hat Enterprise Linux 3. Red Hat has also developed VirtIO drivers for Microsoft Windows guests, for optimized network and disk I/O, that have been certified under Microsoft's Windows Hardware Quality Labs (WHQL) certification program.
Performance and scalability
KVM also inherits the performance and scalability of Linux, supporting virtual machines with up to 16 virtual CPUs and 256GB RAM and host systems with 256 cores and over 1TB RAM. It can deliver:
- Up to 95 to 135 percent performance relative to bare metal for real-world enterprise workloads like SAP, Oracle, LAMP, and Microsoft Exchange.
- More than 1 million messages per second and sub-200-microsecond latency in virtual machines running on a standard server.
- The highest consolidation ratios with more than 600 virtual machines running enterprise workloads on a single server.
That means KVM allows even the most demanding application workloads to be virtualized.
Deploying virtualization
Deploying KVM is rather complex, full of individual configuration considerations, so for more information, please see the documentation.
Managing your virtual machines
There are several virtual machine managers available including:
- Univention Virtual Manager.
- qemu/KVM: You can run QEMU directly from the command line to start a KVM machine.
- Virsh: A minimal shell for managing VMs.
- Virtual Machine Manager: Also known as virt-manager, a desktop user interface for managing VMs (a sketch of the underlying libvirt calls follows this list).
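Most of these managers (virsh and virt-manager in particular) are front ends to libvirt. The following minimal sketch shows the equivalent libvirt Python calls for listing, starting, and shutting down guests; the domain name is hypothetical:

```python
# Minimal sketch of what these managers do under the covers, using the libvirt
# Python bindings directly.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    # Equivalent of "virsh list --all"
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")

    # Equivalent of "virsh start <name>" / "virsh shutdown <name>"
    dom = conn.lookupByName("fedora-test")
    if not dom.isActive():
        dom.create()       # boot the defined guest
    else:
        dom.shutdown()     # request a graceful ACPI shutdown
finally:
    conn.close()
```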
Choosing KVM
On the pro side:
- Although KVM is a relative newcomer among hypervisors, this lightweight module, which comes with the mainline Linux kernel, offers simplicity of implementation plus the continued support of the Linux heavyweights.
- KVM is flexible; because the guest operating systems communicate with a hypervisor that is integrated into the Linux kernel, they can address hardware directly without the need to modify the virtualized operating system. This makes KVM a faster solution for virtual machines.
- Patches to KVM are compatible with the Linux kernel. Because KVM is implemented in the Linux kernel itself, it is easier to control virtualization processes.
On the con side:
- There are no sophisticated tools for the management of the KVM server and VMs.
- KVM still needs to improve its virtual network support, virtual storage support, security, high availability, fault tolerance, power management, HPC/real-time support, virtual CPU scalability, cross-vendor compatibility, and VM portability, and it has yet to build an established cloud services ecosystem.
z/VM hypervisor
The z/VM hypervisor is designed to help extend the value of mainframe technology across the enterprise by integrating applications and data while providing exceptional levels of availability, security, and operational ease.
z/VM virtualization technology is designed to provide the capability to run hundreds to thousands of Linux servers on a single mainframe, alongside other System z operating systems such as z/OS, or as a large-scale Linux-only enterprise server solution.
z/VM V6.1 and z/VM V5.4 can also help to improve productivity by hosting non-Linux workloads such as z/OS, z/VSE, and z/TPF on the same System z server or as a large-scale enterprise-server solution.
z/VM supports Linux, z/OS, z/OS.e, Transaction Processing Facility (TPF), and z/VSE. z/VM also supports z/VM as a guest operating system.
Features
The z/VM base product includes the following components and facilities:
- Control Program (CP): CP is a hypervisor and real-machine resource manager.
- Conversational Monitor System (CMS): CMS provides a high-capacity application and interactive user environment and provides the z/VM file systems.
- TCP/IP for z/VM: TCP/IP for z/VM provides support for the TCP/IP networking environment.
- Advanced Program-to-Program Communication/Virtual Machine (APPC/VM) Virtual Telecommunications Access Method (VTAM) Support (AVS): AVS provides connectivity in an IBM Systems Network Architecture (SNA) network.
- Dump Viewing Facility: It is a tool for interactively diagnosing z/VM system problems.
- Group Control System (GCS): GCS is a virtual machine supervisor that provides multitasking services and supports a native SNA network.
- Hardware Configuration Definition (HCD) and Hardware Configuration Manager (HCM) for z/VM: HCD and HCM provide a comprehensive I/O configuration management environment.
- Language Environment for z/VM: Language Environment provides the runtime environment for z/VM application programs written in C/C++, COBOL, or PL/I.
- Open Systems Adapter/Support Facility (OSA/SF): OSA/SF is a tool for customizing the modes of operation of OSA hardware features.
- REXX/VM: REXX/VM contains the interpreter for processing the REXX programming language.
- Transparent Services Access Facility (TSAF): TSAF provides communication services within a collection of z/VM systems without using VTAM.
- Virtual Machine Serviceability Enhancements Staged/Extended (VMSES/E): VMSES/E provides a tools suite for installing and servicing z/VM and other enabled products.
z/VM also offers the following optional features:
- Data Facility Storage Management Subsystem for VM (DFSMS/VM): DFSMS/VM controls data and storage resources.
- Directory Maintenance Facility for z/VM (DirMaint): DirMaint provides interactive facilities for managing the z/VM user directory.
- Performance Toolkit for VM: Performance Toolkit provides tools for analyzing z/VM and Linux performance data.
- Resource Access Control Facility (RACF) Security Server for z/VM: RACF provides data security for an installation by controlling access to data and system resources.
- Remote Spooling Communications Subsystem (RSCS) Networking for z/VM: RSCS enables users to send messages, commands, files, and jobs to other users in a network.
Deploying virtualization
To deploy virtualization for z/VM:
- Create logical partitions.
- Install and configure z/VM in one or more logical partitions.
- Create virtual machines.
- Install and configure guest operating systems.
- Configure virtual networks for the virtual systems.
Managing your virtual machines
z/VM manages virtual machines through IBM Systems Director, the platform-management foundation that enables integration with Tivoli and third-party management platforms. With it, you can:
- Automate data center operations.
- Unify the management of IBM servers, storage, and network devices.
- Simplify the management of physical and virtual platform resources.
- Reduce operational complexity and provide a view of the relationships and health status of IT systems.
You can even get a single view of the actual energy usage throughout your data center.
Choosing z/VM
On the pro side:
- Ability to virtualize each LPAR into hundreds or more virtual machines.
- Ability to virtualize processor, memory, I/O, and networking resources.
- Ability to dynamically configure processors, memory, I/O, and networking resources.
- Ability to maximize resources to achieve high system utilization with advanced dynamic resource allocation.
- Advanced systems management, administration, and accounting tools.
On the con side:
- You will probably need highly skilled, mainframe-trained IT professionals to maintain it.
In conclusion
IT managers are increasingly looking at virtualization technology to lower IT costs through increased efficiency, flexibility, and responsiveness. As virtualization becomes more pervasive, it is critical that the virtualization infrastructure address the challenges and issues faced by an enterprise data center in the most efficient manner.
Any virtualization infrastructure looking for mainstream adoption in data centers should offer the best-of-breed combination of several important enterprise readiness capabilities:
- Maturity
- Ease of deployment
- Manageability and automation
- Support and maintainability
- Performance
- Scalability
- Reliability, availability, and serviceability
- Security
This article introduced the concept of system virtualization and hypervisors, demonstrated the role a hypervisor plays in system virtualization, and offered some topic areas to consider when choosing a hypervisor to support your cloud virtualization requirements.