by Christian Pruett | Published July 20, 2010
In the early days of IBM AIX®, systems administrators were limited to one logical server per physical server. If you wanted to grow your computing environment, you had to purchase a new IBM RS/6000® or pSeries® server. Conversely, if you had free resources you wanted to share with another server, there was no easy way to do so short of physically moving hardware components.
In 2001, IBM introduced logical partitioning (LPAR) technology, which allowed multiple logical servers to use the same physical server’s resources, including processor, memory, disk, and adapters, managed by a special administrative server called the Hardware Management Console (HMC). This technology let systems administrators create, modify and remove LPARs, manage resources, and do operating system work on disparate logical AIX and Linux® servers within a single physical server.
With several more iterations of LPAR technology, it became possible to dynamically manage resources and micropartition processor resources, letting multiple LPARs share even the same physical processor simultaneously. Then, IBM introduced virtual I/O (VIO) technology, which allows the same LPARs to use the same storage and network resources, thereby breaking the barrier of disk and adapter isolation.
VIO technologies consist of servers, software, and various commands.
VIO technology is based on the idea of having special LPARs that manage the disk and network resources that other LPARs use on the same pSeries or IBM System p servers. Instead of the individual network and disk resources being carved out on an LPAR-by-LPAR basis (especially in cases where there wouldn’t be sufficient resources for all the LPARs to possess what they require to function), one or two VIO servers are given control of these resources and share them out to the other LPARs (VIO client LPARs).
This VIO server runs a special version of the AIX operating system with an additional software package called IOS. This software comes bundled and is managed independently of the usual operating system commands (for example, installp) and versioning structure (technology levels). It is similar to how HMCs have specialized software loaded onto a Linux kernel for a specific purpose.
Note: Installing third-party software or altering the operating system through any means outside of the IOS typically invalidates support from IBM, so it’s best to avoid modifying the server in any nonstandard, non-VIO way.
Instead of using the root user ID to issue commands, you use a dedicated administrative user ID, padmin, for all of the VIO controls. This user ID has a shell called the IOS command-line interface (ioscli) that runs a unique set of commands for managing devices assigned to the VIO servers. Many of these commands are similar to regular AIX commands (for example, lsdev) but use different flags and command structures (for example, lsdev -dev). But most of the superuser-level commands are new and perform many different operations at once. In fact, when a VIO server is administered properly, systems administrators will rarely have to become root.
Helpful padmin commands include:
su - root
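Beyond that, a few ioscli commands come up constantly in day-to-day VIO administration. The following is a sketch of some of the most frequently used ones (run help from the padmin shell to see the full set on your IOS level):

    ioslevel          # display the installed IOS software version
    lsdev -virtual    # list the virtual devices on the VIO server
    lsmap -all        # show the virtual SCSI mappings to client LPARs
    license -accept   # accept the IOS license after installation
    oem_setup_env     # drop to a root shell for vendor software tasks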
As you begin to plan your VIO environment, keep in mind that the success of a fully functioning, highly available setup is directly proportional to the time you invest in checking your hardware, designing some handy spreadsheets, and focusing on the details.
The first area you have to address is how to serve disk resources to your VIO client LPARs. Within VIO, you use three main methods to serve up disk resources:
- Mapping whole physical volumes (hdisks) on the VIO server to clients as virtual SCSI disks
- Carving logical volumes out of a volume group on the VIO server and mapping them to clients as virtual SCSI disks
- Using virtual Fibre Channel (NPIV) adapters, which give clients their own worldwide port names on the SAN
The second area you have to plan out is how to share your network resources with the VIO client LPARs. Similar to the disk resources, there are two main ways of setting things up:
- Shared Ethernet Adapters (SEAs), which bridge a physical Ethernet adapter and a virtual Ethernet adapter on the VIO server
- Integrated Virtual Ethernet (IVE) ports, which are assigned to LPARs directly without passing through a VIO server
1 Physical Ethernet Adapter + 1 Virtual Ethernet Adapter = 1 Shared Ethernet Adapter
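As a sketch of that equation in practice (the device names ent0 and ent2 are assumptions; yours will differ), bridging one physical and one virtual Ethernet adapter from the padmin shell might look like:

    mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

The command creates a new SEA device (for example, ent3), which becomes the adapter through which the VIO client LPARs reach the outside network.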
The third area you need to plan for is having redundant VIO servers on the same physical pSeries or System p server. If a single VIO server supports a dozen VIO client LPARs and something catastrophic knocks that server offline, everything on top of it comes crashing down.
By having two VIO servers with the same set of resources, VIO client LPARs can continue functioning unimpaired if something takes one of the VIO servers down. The VIO client LPARs will go to the other VIO server for their disk and network resources. Mapping disks to both VIO servers and creating network control channels gives VIO client LPARs two legs to stand on. It also makes it possible to perform IOS upgrades on the VIO servers dynamically without affecting the VIO client LPARs.
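With SEA failover configured (ha_mode=auto plus a control channel), you can check which VIO server is currently carrying the network traffic with entstat from the padmin shell. Here, ent5 is a placeholder for your SEA device:

    entstat -all ent5 | grep State

A state of PRIMARY or BACKUP tells you which side is actively serving traffic, which is worth confirming before and after any maintenance.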
The resources that you assign to each VIO server should be as close to identical as possible and designed to maximize availability. Do not pair a slower Ethernet adapter on one VIO server with a faster one on the other. Do not put all of the Fibre Channel (FC) adapters used by both VIO servers in the same physical drawer; instead, stagger the adapters between multiple drawers and assign them independently. Plan out every possible hardware failure and look for ways to maximize redundancy.
In addition, it is especially important to document how everything is mapped out. Record your environment in a spreadsheet and cross-reference it often with the output of commands like vfcmap. Figure 1 provides an example of a simple sheet that details a System p server with two VIO servers and four VIO client LPARs using a mix of SEA, IVE, virtual SCSI, and virtual FC.
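When cross-referencing that spreadsheet, the mapping commands worth running on each VIO server include:

    lsmap -all          # virtual SCSI disk mappings
    lsmap -all -npiv    # virtual Fibre Channel (NPIV) mappings
    lsmap -all -net     # Shared Ethernet Adapter mappings

Comparing this output against your documentation on a regular schedule catches drift before it turns into an outage.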
Now that you’ve determined what you need for your environment, the following procedure guides you through building a VIO server. This procedure assumes that you are familiar with the HMC and SMS along with their menu systems.
lsdev -dev entX -attr
mkvdev -sea $PHYS -vadapter $VIRT -default $VIRT -defaultid $ID -attr ha_mode=auto ctl_chan=$CTRL
Now that your VIO servers are up, the following procedure guides you through building a VIO client LPAR:
lsdev -dev hdiskX -attr
mkvdev -vdev hdiskX -vadapter $VHOST -dev $VTD
lsmap -all -npiv
vfcmap -vadapter vfchostX -fcp fcsX
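After mapping, it is worth verifying that the virtual Fibre Channel adapter is properly paired with an NPIV-capable physical port. A sketch, where vfchost0 is a placeholder for the adapter shown in your own lsmap output:

    lsnports                          # list physical FC ports and their NPIV capability
    lsmap -npiv -vadapter vfchost0    # confirm the mapping for a single virtual adapter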
From this point, the installation follows a standard AIX server installation.
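Once the client installation completes, confirm that the client LPAR really does see its storage through both VIO servers. Run as root on the client:

    lspv      # list the virtual disks the client received
    lspath    # with dual VIO servers, each disk should show two Enabled paths

If lspath shows only one path per disk, revisit the virtual SCSI or virtual FC mappings on the second VIO server before putting the client into production.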
VIO technology can help you use resources more effectively, cut down on hardware costs, and consolidate servers in new and powerful ways. This article provided the background and basics on how to make a simple VIO environment work, but the best way to fully understand all the concepts is to put them into practice and set up and configure some servers on your own. The one piece of advice I leave you with is to plan, document, and test everything before putting it into production. It will be worth it.