Dive into the KVM hypervisor
Hypervisors, virtualization, and the cloud
What to know to start
The Kernel-based Virtual Machine (KVM) is a full native virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). Limited support for paravirtualization is also available for Linux and Windows guests in the form of a paravirtual network driver.
KVM is currently designed to interface with the kernel via a loadable kernel module. Supported guest operating systems include Linux, BSD, Solaris, Windows, Haiku, ReactOS, and the AROS Research Operating System. A patched version of KVM (qemu) can also run on Mac OS X.
Note: KVM does not perform any emulation itself; instead, a user-space program uses the /dev/kvm interface to set up a guest virtual server’s address space, feed it simulated I/O, and map its video display back onto the host’s display.
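Before any of this can work, the host must expose virtualization extensions and the KVM modules must be loaded. The following sketch shows how to verify that on a typical Linux host; the exact module name (kvm_intel or kvm_amd) depends on your CPU vendor, and the commands assume root or sufficient privileges.

```shell
# Check that the CPU exposes virtualization extensions (Intel VT or AMD-V);
# a count greater than zero means the hardware supports KVM
grep -E -c '(vmx|svm)' /proc/cpuinfo

# Confirm the KVM kernel modules are loaded (kvm plus kvm_intel or kvm_amd)
lsmod | grep kvm

# /dev/kvm is the device node that user-space programs (such as qemu) use
# to set up and run guests
ls -l /dev/kvm
```

If /dev/kvm is missing, the module is usually loaded with `modprobe kvm_intel` or `modprobe kvm_amd`.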
Figure 1 shows the KVM architecture.
Figure 1. The KVM architecture
In the KVM architecture, the virtual machine is implemented as a regular Linux process, scheduled by the standard Linux scheduler. In fact, each virtual CPU appears as a regular Linux process. This allows KVM to benefit from all the features of the Linux kernel.
Device emulation is handled by a modified version of qemu that provides an emulated BIOS, PCI bus, USB bus, and a standard set of devices such as IDE and SCSI disk controllers and network cards.
The following features are key to KVM.
Since a virtual machine is implemented as a Linux process, it leverages the standard Linux security model to provide isolation and resource controls. The Linux kernel uses SELinux (Security-Enhanced Linux) to add mandatory access controls, multi-level and multi-category security, and to handle policy enforcement. SELinux provides strict resource isolation and confinement for processes running on Linux.
The sVirt project — a community effort to integrate Mandatory Access Control (MAC) security with Linux-based virtualization (KVM) — builds on SELinux to provide an infrastructure that allows an administrator to define policies for virtual machine isolation. Out of the box, sVirt ensures that a virtual machine's resources cannot be accessed by any other process (or virtual machine); the sysadmin can extend this with fine-grained permissions, for example, to group virtual machines together to share resources.
KVM inherits powerful memory management features from Linux. The memory of a virtual machine is stored in the same way as the memory of any other Linux process and can be swapped, backed by large pages for better performance, shared, or backed by a disk file. NUMA support (Non-Uniform Memory Access, a memory design for multiprocessors) allows virtual machines to efficiently access large amounts of memory.
KVM supports the latest memory virtualization features from CPU vendors with support for Intel’s Extended Page Table (EPT) and AMD’s Rapid Virtualization Indexing (RVI) to deliver reduced CPU utilization and higher throughput.
Memory page sharing is supported through a kernel feature called Kernel Same-page Merging (KSM). KSM scans the memory of each virtual machine and where virtual machines have identical memory pages, KSM merges these into a single page that it shares between the virtual machines, storing only a single copy. If a guest attempts to change this shared page, it will be given its own private copy.
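KSM is controlled through the standard sysfs interface on the host. A minimal sketch of enabling it and inspecting its effect (requires root; the sysfs paths below are the kernel's standard KSM controls):

```shell
# Enable Kernel Same-page Merging so the ksmd daemon starts scanning
echo 1 > /sys/kernel/mm/ksm/run

# Inspect how much deduplication KSM is achieving:
cat /sys/kernel/mm/ksm/pages_shared    # unique shared pages in use
cat /sys/kernel/mm/ksm/pages_sharing   # guest pages backed by those shared copies
```

A high pages_sharing-to-pages_shared ratio indicates effective merging, which is typical when many guests run the same operating system.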
KVM is able to use any storage supported by Linux to store virtual machine images, including local disks with IDE, SCSI and SATA, Network Attached Storage (NAS) including NFS and SAMBA/CIFS, or SAN with support for iSCSI and Fibre Channel. Multipath I/O may be used to improve storage throughput and to provide redundancy.
Again, because KVM is part of the Linux kernel, it can leverage a proven and reliable storage infrastructure with support from all leading storage vendors; its storage stack has a proven record in production deployments.
KVM also supports virtual machine images on shared file systems such as the Global File System (GFS2), allowing images to be shared between multiple hosts or shared using logical volumes. Disk images support thin provisioning, which improves storage utilization by allocating storage only when the virtual machine requires it rather than allocating the entire amount up front. The native disk format for KVM is QCOW2, which supports multiple levels of snapshots, compression, and encryption.
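The QCOW2 features above are exposed through the qemu-img tool. A short sketch, assuming a standard libvirt image directory (the path and image name are placeholders):

```shell
# Create a thin-provisioned 20 GB QCOW2 image; disk blocks are allocated on demand
qemu-img create -f qcow2 /var/lib/libvirt/images/guest01.qcow2 20G

# Take an internal snapshot, then list the snapshot chain
qemu-img snapshot -c before-upgrade /var/lib/libvirt/images/guest01.qcow2
qemu-img snapshot -l /var/lib/libvirt/images/guest01.qcow2

# Compare virtual size with actual disk usage to see thin provisioning at work
qemu-img info /var/lib/libvirt/images/guest01.qcow2
```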
KVM supports live migration, which provides the ability to move a running virtual machine between physical hosts with no interruption to service. Live migration is transparent to the user: the virtual machine remains powered on, network connections remain active, and user applications continue to run while the virtual machine is relocated to a new physical host.
In addition to live migration, KVM supports saving a virtual machine’s current state to disk to allow it to be stored and resumed at a later time.
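Both operations are typically driven through virsh on a libvirt-managed host. A sketch, where "guest01", the destination host, and the save path are placeholders for your own names:

```shell
# Live-migrate a running guest to another KVM host over SSH;
# the guest keeps running throughout the transfer
virsh migrate --live guest01 qemu+ssh://dest-host/system

# Save the guest's current state to disk (the guest stops running)...
virsh save guest01 /var/lib/libvirt/save/guest01.sav

# ...and resume it later from exactly where it left off
virsh restore /var/lib/libvirt/save/guest01.sav
```

Live migration requires that both hosts can access the guest's disk image, for example on shared storage such as NFS or GFS2.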
KVM supports hybrid virtualization where paravirtualized drivers are installed in the guest operating system to allow virtual machines to use an optimized I/O interface rather than emulated devices to deliver high performance I/O for network and block devices.
The KVM hypervisor uses the VirtIO standard, developed by IBM and Red Hat in conjunction with the Linux community, for paravirtualized drivers. VirtIO is a hypervisor-independent interface for building device drivers, allowing the same set of drivers to be used with multiple hypervisors and providing better guest interoperability.
VirtIO drivers are included in modern Linux kernels (later than 2.6.25), are included in Red Hat Enterprise Linux 4.8+ and 5.3+, and are available for Red Hat Enterprise Linux 3. Red Hat has developed VirtIO drivers for Microsoft Windows guests for optimized network and disk I/O; these have been certified under Microsoft's Windows Hardware Quality Labs (WHQL) certification program.
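From inside a Linux guest, you can verify that the paravirtual VirtIO path is actually in use rather than emulated hardware. A quick sketch (module names vary slightly by distribution):

```shell
# Loaded VirtIO driver modules, e.g. virtio_net, virtio_blk, virtio_pci
lsmod | grep virtio

# VirtIO devices appear on the guest's emulated PCI bus
lspci | grep -i virtio
```

If these show nothing but the guest still has network and disk, the guest is probably using emulated devices such as e1000 or IDE, and I/O performance will be noticeably lower.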
Performance and scalability
KVM also inherits the performance and scalability of Linux, supporting virtual machines with up to 16 virtual CPUs and 256GB RAM and host systems with 256 cores and over 1TB RAM. It can deliver:
- 95 to 135 percent of bare-metal performance for real-world enterprise workloads such as SAP, Oracle, LAMP, and Microsoft Exchange.
- More than 1 million messages per second and sub-200-microsecond latency in virtual machines running on a standard server.
- The highest consolidation ratios with more than 600 virtual machines running enterprise workloads on a single server.
That means KVM allows even the most demanding application workloads to be virtualized.
Deploying KVM is rather complex, full of individual configuration considerations, so for more information, please see Related topics.
Managing your virtual machines
There are several virtual machine managers available including:
- Univention Virtual Manager.
- qemu/KVM: You can run QEMU directly from the command line to create and run KVM virtual machines.
- Virsh: A minimal shell for managing VMs.
- Virtual Machine Manager: Also known as virt-manager, a desktop user interface for managing VMs.
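As a quick taste of the virsh option above, here are its basic lifecycle commands ("guest01" is a placeholder for your own domain name):

```shell
# List all defined guests, running or not
virsh list --all

# Start a guest, request a graceful shutdown, and inspect its configuration
virsh start guest01
virsh shutdown guest01
virsh dominfo guest01
```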
On the pro side:
- Although KVM is a relative newcomer among hypervisors, this lightweight module, which ships with the mainline Linux kernel, offers a simple implementation and the continued support of Linux heavyweights.
- KVM is flexible; because the guest operating systems communicate with a hypervisor that is integrated into the Linux kernel, they can address hardware directly without the need to modify the virtualized operating system, which makes KVM a fast solution for virtual machines.
- Patches to KVM are compatible with the Linux kernel. Because KVM is implemented in the Linux kernel itself, it is easier to control virtualization processes.
On the con side:
- There are no sophisticated tools for the management of the KVM server and VMs.
- KVM still needs to improve in areas such as virtual network support, virtual storage support, enhanced security, high availability, fault tolerance, power management, HPC/real-time support, virtual CPU scalability, cross-vendor compatibility, and VM portability, and it has yet to build an established cloud services ecosystem.