vHMC and PowerVC installation on the FSM node

The FSM is now nearing its end of life. For Power nodes in a Flex chassis, the obvious way out is to use a regular HMC to manage the nodes directly. The VMControl virtualization features of the FSM can be taken over by PowerVC, which needs a RedHat Enterprise Linux 7.1 LPAR or VM. Instead of buying new systems to host these servers, why not re-purpose the FSM node and run the HMC and PowerVC on it?

The main reason not to do this is support. The FSM is an appliance with specific hardware to connect to the management network. Firmware updates will become an issue, as these are only provided from within the FSM software. So production use of a re-purposed FSM is definitely not recommended.

In this tutorial we will install the FSM node with RedHat Linux, configure it as a KVM host, create two VMs, and install PowerVC and the HMC in them.

Consult the HMC 8.8.4 README for specific support issues: http://delivery04.dhe.ibm.com/sar/CMA/HMA/05sgg/4/MH01559.readme.html

Using PowerVC 1.3 to manage your environment requires many things to be set up correctly. Look them up here: https://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.3.0/com.ibm.powervc.standard.help.doc/powervc_hwandsw_reqs_hmc.html

FSM Details

The FSM node has the following hardware on-board:

  • One Intel Xeon Processor E5-2650 8C 2.0 GHz 20 MB Cache 1600 MHz 95 W

  • 32 GB of memory with eight 4 GB PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMMs

  • Integrated LSI SAS2004 RAID controller

  • Two IBM 200 GB SATA 1.8″ MLC SSD configured in a RAID 1

  • One IBM 1 TB 7.2 K 6 Gbps NL SATA 2.5″ SFF HS HDD

  • Dual-port 10 Gb Ethernet Emulex BladeEngine 3 (BE3) controller for data network connections

  • Dual-port Broadcom 5718 controller for internal chassis management network connections

  • Integrated Management Module II (IMM2)

See: http://www.redbooks.ibm.com/abstracts/tips0862.html

Steps to perform before re-installing the FSM

  1. Disable Central User management on the FSM if used

  2. Validate access to the CMM and note the firmware level

  3. Update the firmware of all Power Nodes to at least 773.12 (AF773_056)

  4. Unmanage the chassis on the FSM

  5. Power off the FSM

  6. Update the CMM to the latest level

  7. Set password maxage for the USERID user on the FSM to 0 to prevent issues with expiring passwords on the Power Nodes.

See: http://www-01.ibm.com/support/docview.wss?uid=nas8N1010350

Download and/or acquire the following software

  • RedHat Enterprise Linux 7.1 ISO image

    • CentOS 7 may work as well, but this has not been tested.

    • Fixes through subscription manager

  • HMC 8.8.4 Recovery DVD

    • Download from Fix Central, and get the fixes too.

  • PowerVC 1.3.0.0

    • Acquire through IBM/Business Partner

    • Get fixes from fix central (use search option)

  • Virtual IO server 2.2.4 ISO images

    • Acquire through IBM/Business Partner

    • Download fixes from fix central

NB: You will need software licenses for the HMC, PowerVC and RedHat.

 

Installing the FSM with RedHat Linux

From your personal system, open a session on the IMM of the FSM, and start a remote control session. Choose Virtual Media, and attach the RedHat ISO image to the FSM. Now power on the FSM and it will boot from the DVD.

In the installation menu, wipe the partition data on both disks. You will see two disks:

  • /dev/sda: 200 GB; this is the RAID 1 SSD pair, pick this one to install on.

  • /dev/sdb: 1 TB; this is the SATA disk, we will use this for the HMC later.

Then, in the installer:

  • Select Virtualization Host at the software install options.

  • Select the automatic LVM partition scheme.

  • Do not set the IP address yet; we will do this later.

Press install, and have an espresso, this will be completed shortly.

Configure the FSM networking

The FSM has four network connections: two 1 Gbit interfaces are connected to the CMMs, and two 10 Gbit interfaces are connected to the flex switches. By default, none of these connections uses VLAN tagging or LACP, and we will keep it that way.

These are the location codes/names for the network interfaces:

enp6s0f0: CMM1 network

enp6s0f1: CMM2 network

eno1: 10Gbit adapter connected to switch1

enp12s0f1: 10Gbit adapter connected to switch2

 

We will create a bond device using fail-over and a ping address on both sets of interfaces, and then create a bridge device that holds the IP address. This is all done through editing/creating configuration files.

First, create a bond device for the CMM interfaces. Specify the IP address of your CMM in arp_ip_target. Add or change the following files:

 

vi /etc/sysconfig/network-scripts/ifcfg-enp6s0f0

BOOTPROTO=none

ONBOOT=yes

NM_CONTROLLED=no

MASTER=bond0

SLAVE=yes

 

vi /etc/sysconfig/network-scripts/ifcfg-enp6s0f1

BOOTPROTO=none

ONBOOT=yes

NM_CONTROLLED=no

MASTER=bond0

SLAVE=yes

 

vi /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0

ONBOOT=yes

BOOTPROTO=none

USERCTL=no

BONDING_OPTS="mode=1 arp_interval=60 arp_ip_target=<CMM-IP-ADDRESS>"

BRIDGE=br0

 

vi /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0

BOOTPROTO=static

ONBOOT=yes

TYPE=Bridge

DELAY=0

NM_CONTROLLED=no

IPADDR=<your FSM management IP address here>

PREFIX=24

DEFROUTE=no

IPV6INIT=no

 

Do the same for the front end network, but create bond1/br1.

Change the bond device to ping the default gateway:

arp_ip_target=<GATEWAY-IP-ADDRESS>

 

Add a default gateway and DNS in the bridge device configuration file:

DEFROUTE=yes

GATEWAY=<your GATEWAY IP in the public network>

DNS1=<your DNS server>

IPADDR=<FSM public IP address>

If you do not use the 10Gbit network, skip this step and add the above changes to bond0/br0.

 

Start the network with systemctl:

systemctl restart network

 

Validate with:

cat /proc/net/bonding/bond0

cat /proc/net/bonding/bond1

brctl show

 

Set the hostname with:

hostnamectl set-hostname <hostname of FSM public IP address>

Perform system configuration

Install additional software and create filesystems for use by KVM.

 

Stop the firewall and NetworkManager:

systemctl disable NetworkManager

systemctl stop NetworkManager

systemctl disable firewalld

systemctl stop firewalld

 

Disable SELinux by editing its configuration file:

vi /etc/selinux/config

SELINUX=disabled
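
The change in /etc/selinux/config only takes effect at the next boot. To switch the running system to permissive mode right away without rebooting:

setenforce 0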

 

Subscribe to RedHat, see: https://access.RedHat.com/solutions/253273
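
In short, registering the system and attaching a subscription looks roughly like this; the user name is a placeholder and how you attach entitlements depends on your account:

subscription-manager register --username <your RedHat account>

subscription-manager attach --auto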

 

Add repositories:

subscription-manager repos --list

subscription-manager repos --enable <reponame>

 

Install additional software and update to the latest level:

yum -y groupinstall "X Window System" "Gnome Desktop"

yum -y install tigervnc-server httpd virt-manager firefox

yum -y update

 

Set the system to graphical login:

systemctl get-default

systemctl set-default graphical.target

 

The system is configured with only /dev/sda in the rhel volume group. This can be seen with the following commands:

vgdisplay

pvdisplay

 

First, create a partition table on /dev/sdb with a single partition spanning the whole disk, using fdisk:

fdisk /dev/sdb
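
If you prefer a non-interactive approach over the fdisk dialog, a rough equivalent with parted looks like this, assuming /dev/sdb is still empty and the partition will be used for LVM:

parted -s /dev/sdb mklabel msdos

parted -s /dev/sdb mkpart primary 1MiB 100%

parted -s /dev/sdb set 1 lvm on

partprobe /dev/sdb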

 

Create a new PV, extend the rhel volume group, and create LVs for the repository, the ISO files and the HMC:

pvcreate /dev/sdb1

vgextend rhel /dev/sdb1

lvcreate -n repolv -L 100G rhel /dev/sdb1

lvcreate -n isolv -L 100G rhel /dev/sdb1

lvcreate -n hmclv -L 400G rhel /dev/sdb1

mkfs.xfs /dev/mapper/rhel-repolv

mkfs.xfs /dev/mapper/rhel-isolv

mkfs.xfs /dev/mapper/rhel-hmclv

mkdir /repo

mkdir /iso

mkdir /hmc

mkdir /vms

 

Edit fstab, add entries for the new filesystems, and change the existing /home entry to mount on /vms instead. The /home filesystem takes all the free space on the SSDs, and we will use that to host the PowerVC VM.

vi /etc/fstab

/dev/mapper/rhel-homelv /vms xfs defaults 0 2

/dev/mapper/rhel-isolv /iso xfs defaults 0 2

/dev/mapper/rhel-hmclv /hmc xfs defaults 0 2

/dev/mapper/rhel-repolv /repo xfs defaults 0 2
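
A minimal way to activate the new layout, assuming nothing is using /home yet:

umount /home

mount -a

df -h /vms /iso /hmc /repo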

 

Define yum repositories to provide to the PowerVC VM. You can also subscribe the PowerVC VM directly to RedHat, in which case you can skip this step.

cd /repo

mkdir rhel71

cd rhel71

reposync --gpgcheck -l --repoid=<REPOID> --download_path=$PWD --downloadcomps --download-metadata

createrepo -g comps.xml $PWD

ln -s /repo/rhel71 /var/www/html/rhel71

systemctl enable httpd

systemctl start httpd
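
As a quick sanity check, the repomd.xml created by createrepo should now be reachable over HTTP; this assumes the default Apache configuration, which follows the symlink created above:

curl -I http://localhost/rhel71/repodata/repomd.xml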

 

Prepare the repository file for use by the PowerVC VM; we will copy it to /etc/yum.repos.d on that VM later.

vi /root/rhel71.repo

[rhel71]

name=All of rhel71

baseurl=http://<FSM IP address>/rhel71

enabled=1

gpgcheck=0
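
Later, once the PowerVC VM is up and reachable, the file can simply be copied over and tested; the IP address below is a placeholder:

scp /root/rhel71.repo root@<PowerVC VM IP>:/etc/yum.repos.d/

Then, on the PowerVC VM:

yum clean all

yum repolist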

 

Copy all ISO images needed for PowerVC, VIO, HMC, firmware and fixes into /iso.

Reboot the server. RedHat Configuration is finished.

Configuring KVM with virt-manager

Start vncserver and connect to the desktop. From there you can start the virt-manager GUI.


 

First, add directory-based storage pools for the filesystems created earlier. In the GUI, go to Connection Details > Storage and add pools for /iso, /vms and /hmc.
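
If you prefer the command line over the GUI, here is a sketch of the same directory pools with virsh; the pool names are just examples:

virsh pool-define-as iso dir --target /iso

virsh pool-define-as vms dir --target /vms

virsh pool-define-as hmc dir --target /hmc

for p in iso vms hmc; do virsh pool-start $p; virsh pool-autostart $p; done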

 

Next, create storage files for the HMC and PowerVC VMs. Start with a 400 GB file in the /hmc pool for the HMC VM:


 

Then create a 40 GB file in the SSD pool (/vms) for PowerVC, using the same procedure.
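
The same can be done from the shell with qemu-img; the file names are just examples:

qemu-img create -f qcow2 /hmc/vhmc.qcow2 400G

qemu-img create -f qcow2 /vms/powervc.qcow2 40G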

 


 

 

You have already created the bridge devices in Linux, so there is no work for this in KVM.

 

– Define a VM for PowerVC: 2 vCPUs, 10 GB memory, 40 GB disk on SSD (/vms).

  – Add a virtual network adapter for both the br0 and br1 devices.

– Define a VM for the vHMC: 4 vCPUs, 8 GB memory, 400 GB disk on HDD (/hmc).

  – Add a virtual network adapter for both the br0 and br1 devices.

Make sure the VMs start automatically at system boot time using the virt-manager GUI; this can be set under "Boot Options" in the left pane.
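
For reference, a scripted sketch of the PowerVC VM definition with virt-install, reusing the bridges and the example disk image from above; the VM name, image path and ISO path are assumptions, so adjust them to your setup:

virt-install --name powervc --vcpus 2 --ram 10240 \
  --disk path=/vms/powervc.qcow2,format=qcow2 \
  --network bridge=br0 --network bridge=br1 \
  --cdrom /iso/<RHEL 7.1 ISO> --os-variant rhel7.1 --graphics vnc

virsh autostart powervc

The vHMC VM can be defined the same way with 4 vCPUs, 8 GB of memory, the 400 GB image on /hmc, and the HMC Recovery DVD ISO as the CDROM.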

 

Minimum requirements of the vHMC:

  • 8 GB of memory

  • 4 processors

  • 1 network interface, with a maximum of 4 allowed

  • 160 GB of disk space; 700 GB is recommended to retain adequate PCM data

 

See: http://www-01.ibm.com/support/knowledgecenter/8284-22A/p8hai/p8hai_installvhmc.htm

 

 


 

 

Installing the HMC VM

Attach the HMC ISO image to the CDROM drive of the VM, and set the CDROM ahead of the hard disk in the boot order. Start the VM and choose a new installation in the menu on the console screen.

 

My install exited into bash; I typed exit and it continued. Not sure why this happened, YMMV. Have a cup of tea, and then another, as this will take a while.

After Installation, perform the following tasks:

  • Configure network interfaces and firewall settings

  • Configure default gateway

  • Configure DNS

  • Configure Call Home

  • Allow remote SSH

  • Allow remote Virtual Terminal

 

Install HMC fixes by connecting the fix ISO file to the virtual CDROM and installing from the command line:

updhmc -t dvd

Command-line installation is mandatory for fix MH01560!

 

You can now manage the Power nodes in the flex chassis by creating connections to them. The password is the same as the CMM USERID password.
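
On the HMC command line this boils down to something like the following; the FSP IP address is a placeholder and the exact mksysconn syntax may differ between HMC levels, so verify it on your system:

mksysconn -o add --ip <FSP IP of the Power node> -r sys --passwd <CMM USERID password>

lssysconn -r all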

Configure the HMC to perform call-out to IBM in case of hardware failures in the Power nodes. The chassis components are monitored by the CMM, so the CMM also needs to be able to call out to IBM.

 

Installing the PowerVC VM

Attach the RHEL ISO image to the CDROM drive of the VM, and set the CDROM ahead of the hard disk in the boot order. Start the VM and choose a new installation in the menu on the console screen.

 

In the installation setup screen, perform the following actions:

  1. Select the disk to install on, with LVM autopartition.

  2. Select Basic Web Server as the software installation type

  3. Enter the hostname, IP addresses for both network interfaces

  4. Set root password

After the installation finishes, which should take no more than 5 minutes (not enough time for a proper cup of coffee), proceed with the preparation tasks:

– Disable SELinux, firewalld and NetworkManager, as on the KVM host node.

– Add the rhel71.repo file prepared earlier to /etc/yum.repos.d, or subscribe to RedHat. The repository is needed because the PowerVC installation requires additional RPMs.

– Update the PowerVC node with:

   yum -y update

   Reboot after the update.

 

Install prerequisites for PowerVC:

yum install pyserial python-fpconst python-pyasn1 python-pyasn1-modules python-twisted-core python-twisted-web python-zope-component python-zope-event python-zope-interface SOAPpy python-ndg_httpsclient python-cryptography

 

Install the RPMs from EPEL, or download the CentOS 7.1 versions from http://rpmfind.net:

rpm -ihv python-pyasn1-modules-0.1.6-2.el7.noarch.rpm \

  python-twisted-web-12.1.0-4.el7.x86_64.rpm \

  SOAPpy-0.11.6-17.el7.noarch.rpm \

  python-twisted-core-12.2.0-4.el7.x86_64.rpm \

  pyserial-2.6-5.el7.noarch.rpm \

  python-fpconst-0.7.3-12.el7.noarch.rpm \

  python-zope-interface-4.0.5-4.el7.x86_64.rpm

See: https://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.3.0/com.ibm.powervc.standard.help.doc/powervc_installing_rhel_mgmt_vm_hmc.html

 

Unpack the PowerVC 1.3 software, and run the preinstall check:

cd powervc-1.3.0.0

./install -t

 

Fix any issues, especially with the hostname! You cannot easily change the hostname after installation, and name resolution should work properly. Then run the install for real. This takes about half an hour, more if you install on a regular disk. You may go out to have a proper cup of coffee.

 

Once done, you can log in as root on the web interface; PowerVC users are regular system users. Next, install any fixes. Unpack the 1.3.0.1 update and install it:

cd powervc-1.3.0.1

./update

 

PowerVC is now ready for use.

Installing VIO on Power Node

Acquire the VIO 2.2.4 ISO images and attach the first one to the vHMC. In the Classic interface, go to "HMC Management / Manage repository of VIO Images" and import the ISO image there. You should end up with something like this:

 


In the Enhanced+ interface this option is also available. Go look for it.

Make sure the Power node does not have SOL enabled. Fix this in the CMM under Compute Nodes > select your node > General > deselect Serial Over LAN. If this is not changed, you will not be able to open a console to any LPAR on that Power node.

 

Define a VIO server LPAR to your specifications, which is actually really easy using the Enhanced+ interface. Power it on and select "Install VIO server". The VIO server is started in hardware discovery mode, after which you are presented with a choice of boot adapters. Pick the first one: L1/T1. In the network switch this interface may have VLANs configured, but it may not be part of an LACP aggregation. If you have multiple VLANs, specify the one used by the HMC in the wizard.


 

Tip: In the Enhanced+ GUI there is an option to follow the installation progress output.

NB: The automatic install from the HMC installs VIO servers on the first available disk. This can be FC or SAS. Do NOT use this mechanism when data needs to be preserved on an attached disk.

 

Install the SDDPCM driver on the VIO server. Change the fscsi adapters to use fast_fail and dyn_trk. If booting from SAS, use mirrorios to mirror the rootvg volume group.
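
As padmin, this comes down to something like the following for each fscsi device, plus the SAS mirror; the device names are examples:

chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyn_trk=yes -perm

extendvg -f rootvg hdisk1

mirrorios -defer hdisk1

The -perm flag stores the attribute change so it becomes active after the next reboot, and -defer postpones the restart that mirrorios would otherwise perform.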

Create an LACP/EtherChannel device for use with the Shared Ethernet Adapter, and set the IP address of the Virtual IO server. This concludes the Virtual IO server installation.
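
A sketch of those steps as padmin; the adapter numbers, host name and addresses are examples and must match your own virtual adapter layout and the LACP configuration on the flex switches:

mkvdev -lnagg ent0,ent1 -attr mode=8023ad

mkvdev -sea ent4 -vadapter ent3 -default ent3 -defaultid 1

mktcpip -hostname <VIOS hostname> -inetaddr <VIOS IP> -interface en5 -netmask 255.255.255.0 -gateway <gateway IP>

Here ent0 and ent1 are the physical 10 Gbit ports and ent3 is the virtual trunk adapter; the first command creates the link aggregation device (ent4 in this example), the second builds the SEA on it (ent5), and mktcpip puts the IP address on the SEA interface en5.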

 

Lastly, download the latest fixes from Fix Central, and copy them to the VIO server and install them:

updateios -dev /home/padmin/updates -install -accept

shutdown -restart

 

The infrastructure is now complete. Next steps are:

– Install new LPAR from HMC/VIO to serve as a golden image for PowerVC

– Configure PowerVC:

   – Manage Power System and gold LPAR

   – Manage SAN switch (Brocade only, QLogic is not supported)

   – Manage V7000

   – Create Storage connectivity groups

   – Define Networks

   – Prepare and capture the golden LPAR.

 
