Single Root I/O Virtualization (SR-IOV) is a specification that allows virtual machines to read and write directly to and from hardware. The goal is to speed up that I/O by skipping the intermediate processing that normally happens in the hypervisor; in practice, this lets a virtual machine approach wire speed. SR-IOV can be used with storage, networking, and other types of I/O, but for the purposes of this article we’re only going to talk about networking.
PowerVC 1.3.2 (released December 2016) introduced support for SR-IOV networking on NovaLink-managed hosts. Starting with PowerVC 1.4.2, SR-IOV networking is supported on HMC-managed hosts as well.
Restrictions and Limitations
- Only POWER8 and later hardware is supported; POWER7 and earlier hardware is not supported with SR-IOV networking.
- A PowerVM hypervisor technology called vNIC must be used. Direct attachment of SR-IOV adapters, ports, and logical ports to VMs is not supported by PowerVC.
- Restrictions on AIX, IBM i, and especially Linux levels can be found in the vNIC Frequently Asked Questions.
- This function requires SR-IOV capable adapters. Regular Ethernet hardware is not sufficient.
Supported Features and Functions
- The PowerVC UI can be used to deploy virtual machines that use SR-IOV networking.
- SR-IOV networks can be attached to existing virtual machines.
- Virtual machines using SR-IOV networking can use Live Partition Migration (LPM) to move between POWER hosts.
- Remote restart can be used to evacuate virtual machines using SR-IOV from failed hosts.
- Existing virtual machines that use SR-IOV with vNIC can be brought under PowerVC management to use the functions described above.
- PowerVC REST APIs have been extended to support SR-IOV for use by third party scripting and development.
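As a rough illustration of scripting against these APIs: PowerVC’s network APIs follow OpenStack (neutron) conventions, so listing networks is a token-authenticated GET. This is a sketch only — the host name, token handling, and certificate path below are placeholders, and you should confirm the exact endpoints in the PowerVC REST API documentation for your release.

```python
# Hypothetical sketch of a neutron-style network list against PowerVC.
# Host, token, and paths are placeholders -- verify the real endpoints
# in the PowerVC REST API reference for your release.

def build_list_networks_request(powervc_host, auth_token):
    """Return the (url, headers) pair for a neutron-style network list."""
    url = "https://%s/v2.0/networks" % powervc_host
    headers = {
        "X-Auth-Token": auth_token,   # obtained from Keystone beforehand
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_list_networks_request("powervc.example.com", "TOKEN")
# A real call would then look like (requires the 'requests' package):
#   import requests
#   resp = requests.get(url, headers=headers, verify="/path/to/ca.crt")
#   for net in resp.json()["networks"]:
#       print(net["name"])
```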
PowerVC supports SR-IOV networking using a hypervisor technology called vNIC. This is not to be confused with regular virtual Ethernet; it is instead an alternative to Shared Ethernet Adapters. The vNIC technology provides a client / server model within a compute host and lets us migrate virtual machines (via LPM) from one compute host to another while still using direct-memory-access technologies like SR-IOV. We built PowerVC’s SR-IOV networking support around vNIC because LPM is such a core capability of PowerVC: functions like Dynamic Resource Optimizer (DRO) and host maintenance mode depend on it.
For more information about PowerVM vNIC, see the vNIC Frequently Asked Questions.
Physical Network Names
The data center infrastructure you are managing with PowerVC might have multiple independent physical Ethernet segments that are not connected to each other. For example, it is common to have separate “management” and “data” networks, and it is also common to have a separate 1 Gb network alongside a high-speed 10 Gb or 40 Gb network.
PowerVC allows you to assign a physical network name to each PowerVC network. This name ensures that virtual machines remain on the same physical network as they are deployed and then migrated between compute hosts. For SR-IOV, each physical network name corresponds to the PowerVM “port label” assigned to a physical Ethernet port; the port labels are how PowerVC determines which ports are connected to which physical network segments.
If your data center has only one physical network, PowerVC will use the physical network name “default” and you don’t need to worry about these designations.
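Because PowerVC’s network layer is OpenStack-based, a physical network name maps naturally onto neutron’s standard provider attributes. The sketch below builds a neutron-style network-create body carrying a physical network name; treat the exact attribute mapping as an assumption based on common OpenStack conventions, and verify it against your PowerVC release.

```python
# Hypothetical sketch: expressing a PowerVC physical network name the way
# neutron expresses it, via the provider:physical_network attribute.
# The attribute mapping is an assumption -- confirm it for your release.

def build_network_body(name, physical_network="default", vlan_id=None):
    """Build a neutron-style network-create body with a physical network name."""
    network = {
        "name": name,
        # Ties this network to a physical segment ("default" if your
        # data center has only one physical network, as described above).
        "provider:physical_network": physical_network,
    }
    if vlan_id is not None:
        network["provider:network_type"] = "vlan"
        network["provider:segmentation_id"] = vlan_id
    return {"network": network}

body = build_network_body("test-net", physical_network="test", vlan_id=42)
```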
Using The Function
When a host with SR-IOV networking adapters is managed by PowerVC, you can see the physical SR-IOV ports on that host’s details page. You can also configure the physical network names that each port is using on this panel:
When creating a network, an SR-IOV section will be available that allows selection of the physical network segment that this PowerVC network should be associated with, and will show the physical SR-IOV Ethernet ports associated with that network. For example, if we had separate physical networks for “production” and “test” and we wanted to create a PowerVC network for the “test” segment, we might see this when creating the PowerVC network:
So we can see that any virtual machines using this network that get deployed or migrated to the host named “P8_5” will route their traffic through those two Ethernet ports.
Finally, when we deploy a virtual machine and select this network, we’ll have the option to configure redundancy and a minimum bandwidth capacity:
The redundancy model is implemented in the vNIC layer and uses an active / passive configuration with anti-affinity between physical ports, physical adapters, and VIOS. In other words, redundancy on this system would route all the traffic through one of the physical Ethernet ports displayed above (whichever port is least utilized) but if that port fails due to a transceiver issue or cable break, traffic would automatically move to the other port.
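The active / passive behavior can be pictured with a small model. This is illustrative only — the real selection logic lives in the PowerVM hypervisor and vNIC layer, and the port names are made-up examples — but it mirrors the behavior described above: traffic uses the least-utilized healthy port and fails over when the active port goes down.

```python
# Illustrative model of vNIC active/passive backing-port selection.
# This is NOT PowerVM code; it only mimics the described behavior.

def pick_active_port(ports):
    """Choose the healthy port with the lowest utilization, or None."""
    healthy = [p for p in ports if p["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda p: p["utilization"])

ports = [
    {"name": "C5-T1", "utilization": 0.40, "healthy": True},
    {"name": "C6-T2", "utilization": 0.15, "healthy": True},
]
active = pick_active_port(ports)     # least-utilized port carries traffic
ports[1]["healthy"] = False          # simulate a cable break on that port
failover = pick_active_port(ports)   # traffic moves to the remaining port
```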
The minimum capacity setting guarantees the workload a percentage of the port’s bandwidth. A workload can use more than that share when bandwidth is available, and other workloads can borrow unused bandwidth, but each workload is always guaranteed at least its configured percentage of the port’s bandwidth when it needs it.
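The guaranteed floor is simply the capacity percentage applied to the port’s line rate; the 10 Gb port speed and 20% setting below are example values, not defaults:

```python
# Guaranteed bandwidth floor implied by a vNIC minimum-capacity setting.
# Port speed and percentage are example values, not PowerVC defaults.

def guaranteed_gbps(port_speed_gbps, capacity_pct):
    """Minimum bandwidth a workload is guaranteed, in Gb/s."""
    return port_speed_gbps * capacity_pct / 100.0

floor = guaranteed_gbps(10, 20)   # 20% of a 10 Gb port -> 2.0 Gb/s floor
```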
For more details about using PowerVC with SR-IOV, refer to the documentation in the Knowledge Center, and look for updates on the HMC support when PowerVC 1.4.2 is released: SR-IOV backed networks.
Note: to view a different version of a topic in the Knowledge Center, click “Change version” at the top of the page.