Abstract

This article provides an architectural overview of OpenStack high availability for a Red Hat Enterprise Linux environment. It describes the standalone, shared, and cluster resources required to set up this configuration.


Executive summary

OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

The OpenStack project is a global collaboration of developers and cloud computing technologists producing the open standard cloud computing platform for both public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated programs delivering various components for a cloud infrastructure solution.

Most high availability systems guarantee protection against system downtime and data loss only in the event of a single failure. There are several enterprise solutions for OpenStack HA that are ready to use, but there is no HA solution available yet for the community-driven OpenStack master project. This document describes an architecture for configuring community OpenStack with high availability.


Scope of the article

This article provides an architectural overview of configuring OpenStack with high availability for the controller and network stack.

This configuration was implemented for the IBM Managed Platform as a Service (MPaaS) solution, which provides services for securely auto-provisioning and managing middleware stacks across on-premises and cloud platforms.

Ceph storage and compute node configuration are beyond the scope of this document. It only includes the OpenStack services that are used by IBM MPaaS services (MariaDB, Keystone, Nova, Neutron, Glance, Cinder, Heat, and Horizon).


Physical infrastructure overview

In IBM MPaaS, the controller and network nodes were combined and run as a single node for HA. There are three such nodes in the cluster. The Pacemaker cluster manages most of the OpenStack services.
There are three networks in the MPaaS environment:

  • A private network for management.
  • A public network connected to the controllers, which provides internet connectivity to the tenants and public IPs, if required.
  • An over-relay network, which is a local network among the compute and controller nodes. OpenStack uses this over-relay network to create VXLAN (Virtual Extensible LAN) tunnels for tenant SDNs (Software-Defined Networks). The management and over-relay networks are also used for configuring the cluster.



OpenStack component architecture for HA

An OpenStack system consists of multiple independent components (stacks) that are very loosely coupled. Each component uses multiple services, and each service needs to be configured for HA separately.


Clustering overview

The Pacemaker cluster is used to manage all services except MariaDB, RabbitMQ, and memcached. The MariaDB and RabbitMQ services are configured to replicate among themselves internally; however, they are accessed through the VIP, which is a cluster resource. Memcached does not require any replication, as it can work independently.

Stateless services are configured in Active/Active mode and stateful services in Active/Passive mode, using Pacemaker and the corresponding HAProxy load-balancing options.



Standalone services

Memcached

Memcached does not support typical forms of redundancy such as clustering. However, OpenStack services can use almost any number of instances by configuring multiple host names or IP addresses. Hence, memcached is installed on all nodes, and the client configuration of each OpenStack service is updated with all three controller nodes.
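As a minimal sketch, assuming controllers named ctrl1, ctrl2, and ctrl3 and using the crudini utility (both are illustrative choices, not part of the MPaaS build), the memcached client list for a service could be set as follows; Keystone token caching is shown as an example:

  # Point the service's cache client at all three memcached instances
  # (host names ctrl1/ctrl2/ctrl3 are assumptions for illustration).
  crudini --set /etc/keystone/keystone.conf cache enabled true
  crudini --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
  crudini --set /etc/keystone/keystone.conf cache memcache_servers ctrl1:11211,ctrl2:11211,ctrl3:11211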

Shared services

MariaDB Galera cluster

MariaDB is used to store all information inside OpenStack. It uses separate databases for Nova, Nova API, Keystone, Cinder, Glance, Neutron, and Heat. Internal synchronization of the databases is done by configuring a Galera cluster as follows:

  1. Install MariaDB on all hosts along with the mariadb-galera-server package.
  2. Configure MariaDB to listen only on the management IP of the server. (This is required so that HAProxy can bind the VIP to the database port.)
  3. Configure InnoDB and WSREP replication among the nodes.
  4. Use the galera_new_cluster command to initialize the database on any one node, and then start MariaDB normally on the other nodes.
  5. Run mysql_secure_installation on each node individually.

MariaDB is now ready for use. Any node can be used for database queries; the OpenStack services connect through the HAProxy virtual IP. A sketch of the Galera settings and bootstrap sequence follows.
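The Galera/WSREP settings in /etc/my.cnf.d/galera.cnf could look like the following fragment on each controller; the management IPs 192.168.10.11-13 and the cluster name are assumptions for illustration:

  # /etc/my.cnf.d/galera.cnf (fragment) on each controller
  [mysqld]
  bind-address = 192.168.10.11
  default_storage_engine = InnoDB
  binlog_format = ROW
  innodb_autoinc_lock_mode = 2
  wsrep_on = ON
  wsrep_provider = /usr/lib64/galera/libgalera_smm.so
  wsrep_cluster_name = "openstack_galera"
  wsrep_cluster_address = "gcomm://192.168.10.11,192.168.10.12,192.168.10.13"
  wsrep_sst_method = rsync

After the configuration is in place on all three nodes, bootstrap the cluster from one node and start the others normally:

  galera_new_cluster              # first node only
  systemctl start mariadb         # remaining nodes
  mysql_secure_installation       # every node, individually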


RabbitMQ

RabbitMQ is the messaging (AMQP) service widely used for OpenStack. A load balancer can be used for RabbitMQ if the Red Hat kernel version is above 3.10.0-327. However, it has been observed that RabbitMQ uses long-lived TCP connections that frequently get disconnected when routed through the load balancer. Hence, RabbitMQ is configured to listen on all interfaces of the server, so it also listens on the virtual IP directly, and clients connect to the host that currently holds the VIP.

Note: The client configuration must be updated with the VIP.
The following steps are used to implement this configuration (a sketch follows the list):

  1. Install RabbitMQ on all nodes.
  2. Configure RabbitMQ for HA queues.
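A minimal sketch of the clustering and HA-queue steps, assuming controllers named ctrl1, ctrl2, and ctrl3 (the host names are illustrative):

  # On ctrl2 and ctrl3: use the same Erlang cookie as ctrl1, then join the cluster.
  scp ctrl1:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie
  chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie
  systemctl restart rabbitmq-server
  rabbitmqctl stop_app
  rabbitmqctl join_cluster rabbit@ctrl1
  rabbitmqctl start_app

  # On any node: mirror all non-internal queues across the cluster (HA queues).
  rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode":"all"}'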


Pacemaker cluster

Red Hat Enterprise Linux is used for the OpenStack controllers. To use Pacemaker, the Red Hat HA subscription needs to be enabled on all the controllers.
Note: Do not use the systemctl command to manage services (resources) under Pacemaker. Instead, use the pcs resource command to manage the services.

  1. Install the required packages, namely pacemaker, pcs, corosync, fence-agents, resource-agents, and libqb.
  2. Configure and start pcsd on all nodes.
  3. Set the same password on all the nodes for the hacluster user ID. The password must be non-expiring.
  4. Set up the cluster with authentication from any one of the nodes, as shown in the sketch after the following note.

Note: Corosync is configured automatically, but the service needs to be enabled and started manually. Because no shared file systems are used for HA, there is no need to consider a split-brain situation; hence, the STONITH service is not used.
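A minimal sketch of the cluster setup, assuming controllers named ctrl1, ctrl2, and ctrl3, a cluster name of openstack-ha, and an example password (all illustrative); the pcs syntax shipped with RHEL 7 is shown:

  # On all nodes: start pcsd and set a non-expiring password for the hacluster user.
  systemctl enable pcsd && systemctl start pcsd
  echo 'hacluster:ExamplePassword' | chpasswd

  # From any one node: authenticate and create the cluster.
  pcs cluster auth ctrl1 ctrl2 ctrl3 -u hacluster -p ExamplePassword
  pcs cluster setup --name openstack-ha ctrl1 ctrl2 ctrl3
  pcs cluster start --all
  pcs cluster enable --all                  # enables corosync and pacemaker at boot
  pcs property set stonith-enabled=false    # STONITH is not used (no shared file systems)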

Cluster resources

Virtual IP

The virtual IP is an OCF resource that can float freely among the cluster nodes. This is an Active/Passive resource.
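For example, the floating address could be defined as follows; the VIP address 192.168.10.100 and the netmask are assumptions for illustration:

  # Create the virtual IP as an OCF resource; Pacemaker keeps it on exactly one node.
  pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.10.100 cidr_netmask=24 op monitor interval=30s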


HAProxy load balancer

An HAProxy load balancer with the virtual IP is used to manage all services under the Pacemaker cluster. The load balancer must have affinity with the node on which the virtual IP is running (a colocation constraint in Pacemaker). HAProxy needs to be installed and configured individually on all servers.

All of the services are configured for load balancing based on source IP affinity. The following table shows the ports bound to HAProxy and the corresponding services.

The MariaDB Galera cluster uses a special mechanism to determine MariaDB health status: a check script running under the xinetd service.

  Service             Port
  galera_cluster      3306
  glance_api          9292
  dashboard           80
  keystone_admin      35357
  heat_cf_api         8000
  neutron_api         9696
  nova_vncproxy       6080
  heat_api            8004
  nova_ec2_api        8773
  nova_compute        8774
  nova_metadata       8775
  glance_registry     9191
  cinder_api          8776
  keystone_internal   5000
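The fragments below sketch what the haproxy.cfg entries could look like for two of these ports, assuming the VIP 192.168.10.100 and controller management IPs 192.168.10.11-13 (all illustrative). The Galera entry assumes the commonly used clustercheck script answering on port 9200 under xinetd, which is an assumption about the health-check implementation:

  # haproxy.cfg (fragment) - source-IP affinity for the Keystone internal API
  listen keystone_internal
      bind 192.168.10.100:5000
      balance source
      server ctrl1 192.168.10.11:5000 check inter 2000 rise 2 fall 5
      server ctrl2 192.168.10.12:5000 check inter 2000 rise 2 fall 5
      server ctrl3 192.168.10.13:5000 check inter 2000 rise 2 fall 5

  # haproxy.cfg (fragment) - Galera with the xinetd health-check script on port 9200
  listen galera_cluster
      bind 192.168.10.100:3306
      balance source
      option httpchk
      server ctrl1 192.168.10.11:3306 check port 9200 inter 2000 rise 2 fall 5
      server ctrl2 192.168.10.12:3306 backup check port 9200 inter 2000 rise 2 fall 5
      server ctrl3 192.168.10.13:3306 backup check port 9200 inter 2000 rise 2 fall 5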


Apache httpd

The web server must be configured as a cluster resource. This should be an Active/Active resource.

Keystone (ports 5000 and 35357) and the Dashboard (port 80) are served by httpd for OpenStack. Ensure that these ports listen only on the management interface by changing the Listen directive to the management IP of the host in the httpd configuration.
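A minimal sketch, assuming the management IP 192.168.10.11 on this controller and a resource named httpd (both illustrative):

  # In /etc/httpd/conf/httpd.conf, bind the web server to the management interface only:
  #   Listen 192.168.10.11:80
  # Then manage httpd as a cloned (Active/Active) resource under Pacemaker.
  pcs resource create httpd systemd:httpd op monitor interval=30s
  pcs resource clone httpd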


Keystone

The Keystone service provides authentication tokens to all OpenStack services. MariaDB must be up and running before configuring Keystone. Keystone uses two ports, 5000 and 35357, and both of these ports can be under the load balancer. Keystone does not run as a separate service; these two ports are opened by the web service (httpd).

  1. Install the Keystone services along with the mod_wsgi package on all servers.
  2. Update the configuration identically on all the controllers and update the database from any one node.
  3. Run "fernet_setup" and "credential_setup" from any one node, then copy the credential-keys and fernet-keys directories to the other nodes.
  4. Restart httpd using pcs commands. (A sketch of steps 2 through 4 follows.)
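A minimal sketch of steps 2 through 4, assuming controllers ctrl2 and ctrl3 and a cloned httpd resource named httpd-clone (the names are illustrative):

  # Populate the Keystone database from any one node.
  keystone-manage db_sync
  # Initialize Fernet and credential keys on the same node.
  keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
  # Copy the generated key directories to the other controllers.
  rsync -a /etc/keystone/fernet-keys /etc/keystone/credential-keys ctrl2:/etc/keystone/
  rsync -a /etc/keystone/fernet-keys /etc/keystone/credential-keys ctrl3:/etc/keystone/
  # Restart the web server through the cluster, not with systemctl.
  pcs resource restart httpd-clone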


Horizon (OpenStack Dashboard)

The OpenStack Dashboard is the web GUI for managing OpenStack. It also runs under the httpd service, using port 80 and the /dashboard path. Although httpd runs in Active/Active mode, port 80 for the dashboard needs to be configured in failover mode in the HAProxy configuration.

Note: Multiple memcached instances are not supported in the dashboard configuration. Hence, the virtual IP is used in the configuration, so that the dashboard connects to the memcached instance on the server that has the VIP attached.
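A sketch of the failover-style HAProxy entry for port 80, using the same illustrative VIP and management IPs as above; only one back end is active and the others are marked as backups:

  # haproxy.cfg (fragment) - dashboard on port 80 in failover (Active/Passive) mode
  listen dashboard
      bind 192.168.10.100:80
      balance source
      server ctrl1 192.168.10.11:80 check inter 2000 rise 2 fall 5
      server ctrl2 192.168.10.12:80 backup check inter 2000 rise 2 fall 5
      server ctrl3 192.168.10.13:80 backup check inter 2000 rise 2 fall 5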


Glance

Glance, the image service, enables users to discover, register, and retrieve virtual machine images. It has two services and two configuration files: the Glance API service and the Glance Registry service.
The following configuration must be done (a sketch follows the list):

  • Both services must be configured to listen only on the management IPs, and both are managed by HAProxy.
  • The Glance API configuration must also be updated with the VIP to reach the Glance Registry.
  • The Glance Registry configuration must be updated with the VIP to reach the Glance API service.
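A minimal sketch of these settings, using the crudini utility and the same illustrative addresses (management IP 192.168.10.11, VIP 192.168.10.100):

  # Bind both Glance services to the management interface only.
  crudini --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.10.11
  crudini --set /etc/glance/glance-registry.conf DEFAULT bind_host 192.168.10.11
  # Point the Glance API at the registry through the VIP.
  crudini --set /etc/glance/glance-api.conf DEFAULT registry_host 192.168.10.100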


Cinder

Cinder, the block storage service, provides block storage devices to guest instances. Multiple back ends can be used behind Cinder to provide storage space, such as NAS/SAN, NFS, iSCSI, Ceph, and more. MPaaS uses Ceph; however, Ceph configuration is not included here, as it is beyond the scope of this document.

Cinder has multiple services but a single configuration file. The Cinder API service needs to listen only on the management interface of each controller. The API port can be under the load balancer.

The Cinder volume service manages volume creation and allocation. It runs as an Active/Passive service under Pacemaker.
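A minimal sketch, with the same illustrative management IP and an assumed Pacemaker resource name:

  # Bind the Cinder API to the management interface; the port sits behind HAProxy.
  crudini --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen 192.168.10.11
  # Run cinder-volume as a single (Active/Passive) instance under Pacemaker.
  pcs resource create cinder-volume systemd:openstack-cinder-volume op monitor interval=30s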


Nova

Nova is the compute service in OpenStack, which handles instance creation and management. Although the instances are launched on the compute nodes, the Nova service manages them centrally from the controller. Five Nova services run on the controller nodes. Two of them, nova-consoleauth and nova-novncproxy, must be configured in Active/Passive mode and must run on the same node; together they provide console access to the VMs.

The Nova API, metadata, vncserver, and novncproxy services must listen on the management IP of each controller node. The "vncserver_proxyclient_address" option is set to the VIP of the server.
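A minimal sketch of the bind addresses and the console-service constraints, using the same illustrative addresses and assumed resource names:

  # Bind the controller-side Nova services to the management interface.
  crudini --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.11
  crudini --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.11
  crudini --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.11
  crudini --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.100   # VIP
  # Run the console services Active/Passive and keep them on the same node.
  pcs resource create nova-consoleauth systemd:openstack-nova-consoleauth op monitor interval=30s
  pcs resource create nova-novncproxy systemd:openstack-nova-novncproxy op monitor interval=30s
  pcs constraint colocation add nova-novncproxy with nova-consoleauth INFINITY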


Neutron

The OpenStack networking service, Neutron, allows you to create networks and attach interface devices managed by other OpenStack services to them. Plug-ins can be implemented to accommodate different networking equipment and software, providing flexibility to the OpenStack architecture and deployment.

Neutron is the most important and complex module in OpenStack. Some implementations dedicate separate node(s) as the Neutron server, but in MPaaS the controller and Neutron server are combined.

Five different services are used for Neutron; more can be used, depending on the additional plug-ins deployed. The Neutron API service must listen on the management IP of the controller interface. All five Neutron services run in Active/Active mode, and the API is under HAProxy. VRRP (Virtual Router Redundancy Protocol) is used for network redundancy in layer 3. Also, multiple DHCP and metadata agents are used for redundancy.

VRRP internally uses Active/Passive virtual routers (qrouters). It relies on the keepalived service to monitor the status of each qrouter. Only one of the routers is active at a time, and the others are on standby. The network outage is not more than 10 seconds in the event of a failure of the active qrouter node.
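A minimal sketch of the server-side options that enable the HA routers and redundant agents; the management IP is again illustrative:

  # Bind the Neutron API to the management interface.
  crudini --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.10.11
  # Enable VRRP-based HA routers and redundant DHCP agents.
  crudini --set /etc/neutron/neutron.conf DEFAULT l3_ha True
  crudini --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
  crudini --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3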


Heat

Heat is an optional orchestration service in OpenStack that is used to automate complex tasks and integrate OpenStack with other applications. It provides an API interface. Heat uses YAML templates (including CloudFormation-compatible templates) to specify configuration requirements.

The Heat API services can be Active/Active, but the Heat engine must be Active/Passive. There are two Heat API services, both of which must listen only on the management interface.
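A minimal sketch, again with illustrative addresses and an assumed resource name:

  # Bind both Heat API services to the management interface; both sit behind HAProxy.
  crudini --set /etc/heat/heat.conf heat_api bind_host 192.168.10.11
  crudini --set /etc/heat/heat.conf heat_api_cfn bind_host 192.168.10.11
  # Run the Heat engine Active/Passive under Pacemaker.
  pcs resource create heat-engine systemd:openstack-heat-engine op monitor interval=30s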


References

  1. MPaaS on IBM Marketplace
    https://www.ibm.com/us-en/marketplace/7983
  2. OpenStack HA Guide (draft document that is not error-free)
    https://docs.OpenStack.org/ha-guide/
  3. OpenStack installation guide for Newton Version on CentOS and RHEL
    https://docs.OpenStack.org/newton/install-guide-rdo/
  4. MPaaS public cloud version
    https://mpaas.ibm.com/mpaas
