Virtualization of NVMe adapters on IBM POWER9 processor-based systems

Introduction to NVMe adapters

Data is extremely precious these days, and how quickly we can access it matters as much as the data itself. Because data grows exponentially every year, we need technologies that let us access it quickly.

Flash drives can be a solution because they provide high bandwidth and speed. However, they are expensive, and therefore, to use the speed of flash drives effectively, the industry has introduced a technology called Non-Volatile Memory Express (NVMe).

Because flash drives are fast, we can use them to cache frequently accessed data for a significant improvement in performance.

Starting with the first generation of IBM® POWER9™ processor-based systems, IBM has introduced an adapter built in collaboration with Seagate. This adapter has a built-in flash drive that can be used for multiple purposes, such as caching and booting logical partitions (LPARs) and Virtual I/O Server (VIOS) instances.

Scope of this article

This article describes the usage of a Non-Volatile Memory Express (NVMe) adapter on POWER9 processor-based systems. It also provides use cases that explain how an NVMe adapter can be used effectively and lists its benefits.

System requirements

The following system requirements are based on testing performed by the IBM Integrated Systems Software Test team. The test configuration includes:

  • Firmware level 910 and later
  • VIOS version and later
  • First generation POWER9 system S914 (9009-41A)
  • System with up to four slots which can be shared between NVMe and SAS adapters
  • Multiple NVMe M.2 solid-state drive (SSD) cards
  • Hardware Management Console (HMC) 910 and later
  • NVMe adapters attached to the C50 port on the POWER9 system

Configuration of NVMe adapter

Customers use NVMe adapters for a variety of purposes. This section describes some common configuration methods.

  • Assigning an NVMe adapter to an IBM AIX® LPAR: In this method, the NVMe adapter is assigned directly to an AIX LPAR, and the flash drives are attached to the AIX LPAR through the NVMe adapter.

  • Assigning an NVMe adapter to VIOS: In this method, the NVMe adapter is assigned to VIOS, and the flash drives are attached to VIOS through the NVMe adapter. We can then assign cache disks from VIOS to the AIX client using the virtual Small Computer System Interface (vSCSI) technology.

  • Enabling AIX caching and Shared Storage Pool (SSP) caching using an NVMe adapter: In this approach, the NVMe adapter is used for caching. Enabling AIX caching caches data on an AIX LPAR, and enabling SSP caching caches the shared storage pools created on VIOS.
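After assigning the adapter, you can confirm that the NVMe controller and its flash drives are visible on the AIX LPAR or VIOS. The following is a sketch using standard AIX device commands; the device names shown (for example, hdisk5) are placeholders and will differ on your system:

    # lsdev -Cc adapter | grep -i nvme
    # lsdev -Cc disk
    # lscfg -vpl hdisk5

On VIOS, the equivalent padmin command is lsdev -type disk.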

Figure 1 represents a typical use case of an NVMe adapter.

Figure 1. Typical use case of an NVMe adapter

The following steps explain how caching can be done.

  1. Create a cache pool on VIOS using the following command, where hdisk5 is the NVMe disk attached through the NVMe adapter:
    $ cache_mgt pool create -d hdisk5
    Pool cmpool0 created with devices hdisk5
  2. Create cache partitions on the cache pool (created on VIOS) using the following commands:
    $ cache_mgt partition create -s 4G -P cache_partition0
    Partition cache_partition0 created in pool cmpool0
    $ cache_mgt partition create -s 4G -P cache_partition1
    Partition cache_partition1 created in pool cmpool0
  3. Assign the cache partitions to the vSCSI adapters present on VIOS using the following commands:
    $ cache_mgt partition assign -P cache_partition0 -v vhost0
    Partition cache_partition0 assigned to vSCSI Host Adapter vhost0
    $ cache_mgt partition assign -P cache_partition1 -v vhost1
    Partition cache_partition1 assigned to vSCSI Host Adapter vhost1
  4. Run the cfgmgr command on the AIX client to discover the cache disk that was assigned from VIOS.
  5. On the AIX client (LPAR1), assign the storage disks to be cached to the cache partition using the following commands:
    # cache_mgt partition assign -P cachedisk0 -t hdisk3
    Partition cachedisk0 assigned to target hdisk3
    # cache_mgt partition assign -P cachedisk0 -t hdisk4
    Partition cachedisk0 assigned to target hdisk4
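After the target disks are assigned, caching can be started and observed on the AIX client. The following is a sketch based on the AIX cache_mgt cache and monitor subcommands; verify the exact syntax on your AIX level, and note that hdisk3 is the example target disk from the steps above:

    # cache_mgt cache start -t hdisk3
    # cache_mgt monitor start
    # cache_mgt monitor get -h -s

The monitor output reports statistics such as read hit rates, which help confirm that frequently accessed data is being served from the NVMe flash cache.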

NVMe adapter use cases

This section describes various use cases of an NVMe adapter.

  • VIOS boot image on NVMe: Customers can use an NVMe device to install and boot a VIOS image.
  • Transferring VIOS boot images to an NVMe device: You can move existing boot images to an NVMe device using Logical Volume Manager (LVM) mirroring. Customers can add an NVMe mirror copy to rootvg and remove the old copy after synchronization completes.
  • Logical volume (LV) backed virtual SCSI device: Customers can install a NovaLink boot image on the device (an LV-backed device can be used to boot a NovaLink partition in a greenfield deployment). A client LPAR can also use an LV-backed device residing on an NVMe volume group (VG) to host a read cache.
  • Read cache device on VIOS: An NVMe device is well suited for a local read cache on VIOS. It can be used for SSP disk caching, where data present in the shared storage pool is cached on the NVMe disk.
  • Live Partition Mobility (LPM) with SSP where NVMe devices are used for caching: In this case, we can migrate an LPAR that has storage assigned from VIOS using SSP technology, where that storage is cached on an NVMe disk.
  • No dependency on disk type for the client: Create a volume group with some NVMe devices and some devices of other types, and create an LV that spreads across them. On the client, this LV appears as a normal vSCSI disk even though it spans the NVMe disk and the other disks.
  • Backup and restore of the VIOS configuration: We can back up VIOS instances with NVMe disks, install a new VIOS build, and restore the configuration on the new build.
  • No limitation on SSP operations: When SSP caching is enabled using an NVMe disk, we can perform any kind of SSP operation, such as adding, removing, or replacing a disk in the SSP; creating, modifying, or deleting a tier in the SSP; and creating or deleting a mirror in the SSP.
  • Upgrade support from previous VIOS levels: We can upgrade VIOS instances from an older level to a new level and start using the NVMe device at the new level directly.
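The LV-backed virtual SCSI use case above can be sketched with standard VIOS commands: create a volume group on the NVMe disk, carve out a logical volume, and map it to a client through a vhost adapter. The device names (hdisk5, vhost0), the volume group and LV names, and the size below are illustrative placeholders:

    $ mkvg -vg nvmevg hdisk5
    $ mklv -lv boot_lv nvmevg 30G
    $ mkvdev -vdev boot_lv -vadapter vhost0

On the client LPAR, the mapped LV then appears as an ordinary vSCSI hdisk after running cfgmgr.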
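The boot-image transfer described above can be sketched with the VIOS mirroring commands. This assumes hdisk0 holds the current rootvg copy and hdisk5 is the NVMe disk; confirm the exact procedure for your VIOS level before running it, because mirrorios restarts the VIOS:

    $ extendvg rootvg hdisk5
    $ mirrorios hdisk5
    $ unmirrorios hdisk0
    $ reducevg rootvg hdisk0

Run unmirrorios and reducevg only after the mirror copy on the NVMe disk has fully synchronized.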
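The backup and restore flow can be sketched with the viosbr command. The backup file name nvme_cfg below is an example; check the viosbr options available at your VIOS level:

    $ viosbr -backup -file nvme_cfg
    $ viosbr -view -file nvme_cfg.tar.gz
    $ viosbr -restore -file nvme_cfg.tar.gz

The -view option lets you inspect the saved device mappings (including NVMe-backed ones) before restoring them on the new VIOS build.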


Limitations

NVMe adapters have the following limitations:

  • VIOS does not allow mapping an NVMe device as a physical volume (PV) backed device.
  • An NVMe disk is not supported as an Active Memory Sharing (AMS) device.
  • VIOS cannot use an NVMe device as an SSP disk because it is locally attached.
  • VIOS does not support rules for the adapter attributes. Currently, customers cannot add, delete, or update rules for these attributes. Enhancements to VIOS rules are being considered for future releases.


Summary

This article helped users understand the requirements for configuring NVMe adapters on first-generation POWER9 processor-based systems. It also outlined appropriate use cases for NVMe adapters and how they can improve performance using flash drive technology.