This article provides the steps to upgrade an OpenStack environment from the Kilo release to the Mitaka release.


OpenStack is a cloud operating system that manages large pools of compute, storage and networking resources in a datacenter. It provides a dashboard through which admins can control the resources. Users can also use the dashboard to provision resources through a web interface.

In this article, we discuss the steps for upgrading an OpenStack environment from Kilo to Mitaka on Red Hat Enterprise Linux 7, where you have one controller, a neutron server, and compute nodes. These are component/service-level upgrade steps for a non-HA environment with minimum downtime. The OpenStack components are upgraded one after the other.

Planning an OpenStack upgrade

The following points will help you plan for a successful OpenStack upgrade:

  1. Identify any potential incompatibilities between releases by reading the OpenStack release notes.
  2. Decide on the appropriate method for the upgrade.
  3. Ensure that you are able to roll back if the upgrade fails.
  4. Ensure that your data is backed up, including configuration files and databases.
  5. Based on SLAs for your services, determine the acceptable downtime and inform users about the downtime in advance.
  6. Use a test environment to verify that the selected upgrade method will work for your production environment.


Before you upgrade, clean the environment to ensure a consistent state. For example, if some instances are not fully purged from the system after deletion, unexpected behavior might occur.

For environments using the OpenStack Networking service (neutron), verify the release version of the database.
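One way to check the neutron database revision is with neutron-db-manage (a sketch; the second config-file path assumes the ML2 plugin layout used later in this article):

```shell
# Print the current alembic revision of the neutron database.
# The ML2 config path is an assumption about your plugin setup.
neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current
```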

Taking a Backup

  1. Take a backup of the current configurations and database.
  2. Save the configuration files on all nodes.
    Sample code:
    # for i in keystone glance nova neutron openstack-dashboard cinder heat ceilometer; \
    do mkdir $i-RELEASE_NAME; \
    cp -r /etc/$i/* $i-RELEASE_NAME/; \
    done

    Note: You can modify this example script on each node to handle different services.

  3. Back up the entire database of your production data. Restoring from backup is the only way to retrieve a previous database version, because database downgrades are not supported in the Kilo release.

Sample code:
# mysqldump -u root -p --opt --add-drop-database --all-databases > RELEASE_NAME-db-backup.sql
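The configuration backup loop above can also be wrapped in a small function so it can be rehearsed against a scratch directory before touching /etc; the function name and parameters are illustrative, not part of any OpenStack tooling:

```shell
#!/bin/sh
# Copy each service's configuration directory into <service>-<release>
# folders under a destination directory. Pass /etc as src_root on a
# real node, or a scratch tree when rehearsing.
backup_configs() {
    src_root="$1"   # directory containing the service config dirs, e.g. /etc
    dest_root="$2"  # directory in which the backup folders are created
    release="$3"    # release label, e.g. kilo
    for svc in keystone glance nova neutron openstack-dashboard \
               cinder heat ceilometer; do
        if [ -d "$src_root/$svc" ]; then
            mkdir -p "$dest_root/$svc-$release"
            cp -r "$src_root/$svc/." "$dest_root/$svc-$release/"
        fi
    done
}
```

Services whose configuration directory does not exist on a given node are simply skipped, so the same script works unchanged on the controller, neutron, and compute nodes.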

Managing Repositories

On all nodes:

  1. Delete the repositories of the previous release (Kilo) packages.
  2. Add the repository for the new release (Mitaka) packages.
  3. Update the repository database.
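On an RDO-based RHEL 7 system, the three steps above might look like the sketch below; the repository package name and URL are assumptions, so verify them against the RDO documentation for your setup:

```shell
# 1. Remove the Kilo release repository package
#    (package name is an assumption; check with `rpm -qa | grep rdo-release`).
yum -y remove rdo-release

# 2. Add the Mitaka release repository
#    (URL is an assumption following the RDO naming convention).
yum -y install https://rdoproject.org/repos/openstack-mitaka/rdo-release-mitaka.rpm

# 3. Rebuild the repository metadata.
yum clean all
yum makecache
```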

Upgrading OpenStack

Sequence for upgrading services

The sequence for upgrading the OpenStack services matters: upgrading services in the wrong order can easily break the cloud. The following order is recommended:

  1. Upgrade database
  2. Upgrade RabbitMQ
  3. Upgrade Memcached
  4. Upgrade OpenStack Identity service (Keystone)
  5. Upgrade the OpenStack image service (Glance)
  6. Upgrade OpenStack compute (Nova)
  7. Upgrade OpenStack networking (Neutron)
  8. Upgrade the OpenStack dashboard (Horizon)
  9. Upgrade the OpenStack orchestration (Heat)

Upgrading the database

Most OpenStack services use an SQL database to store information. The database usually runs on the controller node. The following procedures describe the steps for MariaDB. OpenStack services also support other SQL databases including PostgreSQL.

Before upgrading the database, check the current version and available version of MariaDB.

Details are as follows:

Package Version
Installed Packages
mariadb.x86_64 1:5.5.44-2.el7
mariadb-server.x86_64 1:5.5.44-2.el7
Available Packages for Mitaka
mariadb.x86_64 3:10.1.20-1.el7
mariadb-server.x86_64 3:10.1.20-1.el7
python2-PyMySQL.noarch 0.7.9-2.el7

Upgrade mariadb and mariadb-server using the yum upgrade command, and install python2-PyMySQL. After upgrading, ensure that the MariaDB service is running without errors.
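Assuming the package names from the table above, the upgrade might look like this sketch; mysql_upgrade is included because a jump from MariaDB 5.5 to 10.1 requires the system tables to be migrated:

```shell
# Upgrade the server and client packages and install the Python driver.
yum -y upgrade mariadb mariadb-server
yum -y install python2-PyMySQL

# Restart the service, then migrate the system tables to the new version.
systemctl restart mariadb.service
mysql_upgrade -u root -p

# Confirm the service came back cleanly.
systemctl status mariadb.service
```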

Upgrading Memcached

Before upgrading, verify the current release and the new release available for upgrade. Also, ensure the service is running without any issues.

Details are as follows:

Package Version
Installed Packages
memcached.x86_64 1.4.25-1.el7
Available Packages for Mitaka
memcached.x86_64 1.4.33-2.el7

Upgrade Memcached to 1.4.33-2.el7 using the yum upgrade command. Ensure that memcached.service is running after the upgrade.

Upgrading Keystone

The OpenStack Identity service (Keystone) provides a single point of integration for managing authentication, authorization, and the service catalog. During the keystone upgrade, you cannot create new instances, and the dashboard will not work properly.

Identity service requests are served on ports 5000 and 35357. OpenStack uses the Apache HTTP server with mod_wsgi for this purpose. By default, the standalone keystone service still listens on these ports, so you must manually disable the keystone service.
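A sketch of disabling the standalone service so that only Apache serves the Identity API (the unit name matches the RDO packaging of keystone):

```shell
# Stop the eventlet-based keystone service and prevent it from
# starting at boot, so it never binds ports 5000 and 35357.
systemctl stop openstack-keystone.service
systemctl disable openstack-keystone.service
```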

Before upgrading, verify the current release and new release available for upgrade. Also, ensure the service is running without any issues.

Details are as follows:

Package Version
Installed Packages
httpd.x86_64 2.4.6-40.el7
mod_wsgi.x86_64 3.4-12.el7_0
openstack-keystone.noarch 2015.1.2-1.el7
Available Packages for Mitaka
httpd.x86_64 2.4.6-40.el7_2.4
openstack-keystone.noarch 1:9.2.0-1.el7

Note: mod_wsgi doesn’t require any upgrade.

Once the above packages are upgraded, validate the keystone configuration file against the Mitaka release notes for keystone and make the required changes.

Along with the keystone packages, a few other packages need to be upgraded at this point for the HTTPD service to start.

Depending on the services installed on the environment, the following packages require upgrade:

  • python-openstackclient.noarch
  • python-novaclient
  • python-heatclient
  • python-glanceclient.noarch
  • python-keystoneclient.noarch
  • python-neutronclient.noarch

Once the above components are upgraded, synchronize the keystone database with the following command:
[root@controller keystone]# su -s /bin/sh -c "keystone-manage db_sync" keystone

Now restart the HTTPD service and ensure it is running error-free.

After the keystone package upgrade, recreate the Identity endpoints with v3.

In the Kilo release, the Identity service endpoint used v2; in Mitaka, this must be changed to v3. To change it, create the endpoints with v3 and delete the older ones.

Follow these steps to create new endpoints:

  1. Configure the Identity API version:
    $ export OS_IDENTITY_API_VERSION=3
    Note: Make the above change in the environment .sh file as well.

  2. Create the identity service API endpoints:
    $ openstack endpoint create --region RegionOne \
    identity public http://controller:5000/v3
    $ openstack endpoint create --region RegionOne \
    identity internal http://controller:5000/v3
    $ openstack endpoint create --region RegionOne \
    identity admin http://controller:35357/v3
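Once the v3 endpoints are in place, the old v2.0 entries can be removed. A hedged sketch (the IDs are placeholders you read from the list output):

```shell
# List the identity endpoints and note the IDs of the old v2.0 entries.
$ openstack endpoint list

# Delete each obsolete v2.0 endpoint by its ID.
$ openstack endpoint delete <v2-public-endpoint-id>
$ openstack endpoint delete <v2-internal-endpoint-id>
$ openstack endpoint delete <v2-admin-endpoint-id>
```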

Upgrading the image service (Glance)

Glance is the Image service for discovering, registering, and retrieving virtual machine images. You can use its REST API to query virtual machine image metadata and retrieve an actual image. The virtual machine images can be stored in various locations from simple file systems to object storage systems such as OpenStack Object Storage. Glance runs in the controller node.

Before upgrading, verify the current release and new release available for upgrade. Also, ensure the service is running without any issues.

Details are as follows:

Package Version
Installed Packages
openstack-glance.noarch 2015.1.0-3.el7
Available Packages for Mitaka
openstack-glance.noarch 1:12.0.0-1.el7

Once upgraded, make the essential changes in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf as per the Mitaka release notes. Then, synchronize the glance DB with the following command:

# su -s /bin/sh -c "glance-manage db_sync" glance

On completion of the upgrade, ensure that openstack-glance-api.service and openstack-glance-registry.service are running.

Test the functionality of glance by listing current images. Also, try to create some test images via glance.
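A quick functional check after the upgrade might look like this; the CirrOS image URL is the conventional test image, so adjust the version for your environment:

```shell
# Confirm the API and registry respond by listing existing images.
$ openstack image list

# Upload a small test image to exercise the full create path.
$ curl -o /tmp/cirros.img \
    http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ openstack image create "test-cirros" --file /tmp/cirros.img \
    --disk-format qcow2 --container-format bare --public
```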

Upgrading the compute service

The compute service on both the controller and compute nodes needs to be upgraded. The nova-api service accepts and responds to end-user compute API calls. It supports the OpenStack Compute API, the Amazon EC2 API, and a special Admin API for privileged users to perform administrative actions; it enforces some policies and initiates most orchestration activities, such as running an instance. In OpenStack Kilo there is no separate nova_api database; Mitaka introduces it, so this database must be created while upgrading from Kilo to Mitaka.

  1. Upgrade the Nova services on the controller.

  2. In Kilo, the nova endpoints use v2.0; recreate them with v2.1. Create public, internal, and admin endpoints for nova with v2.1 using the openstack endpoint create command, and delete the old v2.0 endpoints.

  3. Install openstack-nova-api:
    # yum install openstack-nova-api

  4. Create the nova_api database:
    CREATE DATABASE nova_api;

  5. Grant appropriate access to the database:
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

    After installing nova-api and creating the database, add the following parameters to the /etc/nova/nova.conf file on the controller.
    In the [DEFAULT] section, enable only the compute and metadata APIs:
    [DEFAULT]
    enabled_apis = osapi_compute,metadata
    In the [api_database] section, configure database access:
    [api_database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

  6. Upgrade the Nova services on the controller. On a controller node, the following packages require an upgrade to Mitaka:

    Package Version
    Installed Packages
    openstack-nova-api.noarch 2015.1.0-3.el7
    openstack-nova-cert.noarch 2015.1.2-1.el7
    openstack-nova-common.noarch 2015.1.2-1.el7
    openstack-nova-conductor.noarch 2015.1.2-1.el7
    openstack-nova-console.noarch 2015.1.2-1.el7
    openstack-nova-novncproxy.noarch 2015.1.2-1.el7
    openstack-nova-scheduler.noarch 2015.1.2-1.el7
    Available Packages for Mitaka
    openstack-nova-api.noarch 1:13.1.2-1.el7
    openstack-nova-cert.noarch 1:13.1.2-1.el7
    openstack-nova-common.noarch 1:13.1.2-1.el7
    openstack-nova-conductor.noarch 1:13.1.2-1.el7
    openstack-nova-console.noarch 1:13.1.2-1.el7
    openstack-nova-novncproxy.noarch 1:13.1.2-1.el7
    openstack-nova-scheduler.noarch 1:13.1.2-1.el7

    Upgrade the packages using the yum upgrade command.

    After the upgrade, edit the /etc/nova/nova.conf file as per the Mitaka release notes. A few syntax-level changes are required, and some parameters are obsolete in nova.conf for the Mitaka release; these must be taken care of.

    After executing the above steps, populate the compute databases:

    # su -s /bin/sh -c "nova-manage api_db sync" nova
    # su -s /bin/sh -c "nova-manage db sync" nova

    Note: Ignore any deprecation messages in this output.

  7. Upgrade the Nova services on the compute nodes.

openstack-nova-compute is the service that must be upgraded to the Mitaka version. Upgrade it using the yum upgrade command. After the upgrade, modify the /etc/nova/nova.conf file on the compute node as per the Mitaka release notes for nova.

On completion, ensure that openstack-nova-compute service is started and running without any errors.

Finally, ensure that all nova related services are started and running without any errors. Validate the functionality by checking the status of existing instances and spawning new instances.
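The validation step can be sketched as follows; the flavor, image, and network names are placeholders for resources that already exist in your environment:

```shell
# All nova services should report state "up".
$ openstack compute service list

# Boot a throwaway instance to exercise the scheduler, conductor,
# and compute services end to end.
$ openstack server create --flavor m1.tiny --image test-cirros \
    --nic net-id=<network-id> test-vm
$ openstack server list
```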

Upgrading the Networking service (Neutron)

OpenStack Networking service (Neutron) is the major service that requires upgrade after proper validation. Before the upgrade, ensure that the networking of all instances is stable.

Identify the bridging mechanism used by the neutron service: LinuxBridge or OVS bridge. In this scenario, neutron uses the OVS bridging mechanism, with a separate neutron server.

Identify the neutron agents running on the servers and their status before upgrade.

You have L3 agents, Metadata agent, DHCP agent, and Open vSwitch agent running on neutron server. The packages that provide these agents need to be upgraded.

Note: While upgrading neutron, there may be minimal downtime, as you won’t be able to spawn new instances or networks. When restarting services, there may be a brief network outage.

The following packages on the neutron server need to be upgraded to the Mitaka release:

  • openstack-neutron.noarch
  • openstack-neutron-common.noarch
  • openstack-neutron-lbaas.noarch
  • openstack-neutron-ml2.noarch
  • openstack-neutron-openvswitch.noarch

After the upgrade, there are some configuration level changes that need to be done with neutron files. In the Kilo release, the OVS-related settings are part of /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file. However, in Mitaka, these settings need to be updated in /etc/neutron/plugins/ml2/openvswitch_agent.ini file.

Also, make the essential changes in /etc/neutron/neutron.conf, the ml2_conf.ini file, and any other neutron component files as per the Mitaka release notes.

After completing the above steps, ensure that all neutron related services are running without any errors.

On the compute node, upgrade neutron-openvswitch-agent.service, which is required for instance networking.

Validate the upgrade by checking the functionality of the current network components and the networking status of instances, and by spawning new networks, subnets, routers, and instances.
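A sketch of that validation using the Mitaka-era neutron client (resource names are placeholders; delete them after the check):

```shell
# All agents (L3, DHCP, metadata, Open vSwitch) should show as alive.
$ neutron agent-list

# Create throwaway networking resources to exercise the upgraded services.
$ neutron net-create test-net
$ neutron subnet-create test-net 192.0.2.0/24 --name test-subnet
$ neutron router-create test-router
$ neutron router-interface-add test-router test-subnet
```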

Upgrading the dashboard (Horizon)

The package openstack-dashboard.noarch needs to be upgraded for Mitaka. Refer to the Mitaka release notes for dashboard and for any configuration and syntax changes in dashboard-related files.

After the upgrade, restart the HTTPD service and validate the functionality by opening horizon dashboard URL.
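A minimal check might be the following (the URL path assumes the default RDO dashboard location on the controller):

```shell
# Restart the web server that hosts the dashboard.
$ systemctl restart httpd.service

# A 200 response indicates the dashboard pages are being served.
$ curl -s -o /dev/null -w "%{http_code}\n" http://controller/dashboard/
```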

Upgrading orchestration (Heat) service

The Orchestration service provides template-based orchestration for describing a cloud application by running OpenStack API calls to generate running cloud applications. It integrates other core components of OpenStack into a single-file template system. The templates let you create most OpenStack resource types, such as instances, floating IPs, volumes, security groups, and users, and provide advanced functionality such as instance high availability, instance auto-scaling, and nested stacks. This enables OpenStack core projects to reach a larger user base.

The service enables the deployer to integrate with the orchestration service directly or through custom plug-ins. The orchestration service runs in controller node. Before upgrading, verify the current release and new release available for upgrade. Also, ensure the service is running without any issues.

Details are as follows:

Package Version
Installed Packages
openstack-heat-api.noarch 2015.1.2-1.el7
openstack-heat-api-cfn.noarch 2015.1.2-1.el7
openstack-heat-engine.noarch 2015.1.2-1.el7
python-osprofiler.noarch 0.3.0-1.el7
Available Packages for Mitaka
openstack-heat-api.noarch 1:6.1.0-1.el7
openstack-heat-api-cfn.noarch 1:6.1.0-1.el7
openstack-heat-engine.noarch 1:6.1.0-1.el7
python-osprofiler.noarch 1.2.0-1.el7

Upgrade the above packages using the yum upgrade command. Refer to the Mitaka release notes for heat.conf and modify the /etc/heat/heat.conf file as required.

After the upgrade and config changes, synchronize the heat DB using the following command, and restart openstack-heat-api.service, openstack-heat-api-cfn.service, and openstack-heat-engine.service:

# su -s /bin/sh -c "heat-manage db_sync" heat

Validate the functionality of heat services by creating a stack and confirm if the existing stacks are running without any issues.
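A harmless test stack can be created from a minimal HOT template; OS::Heat::RandomString allocates no real cloud resources, which makes it a safe smoke test (the file path and stack name are placeholders):

```shell
# Write a minimal template that exercises the heat API and engine.
$ cat > /tmp/test-stack.yaml <<'EOF'
heat_template_version: 2015-10-15
resources:
  test_value:
    type: OS::Heat::RandomString
EOF

# Create the stack and confirm it reaches CREATE_COMPLETE.
$ openstack stack create -t /tmp/test-stack.yaml test-stack
$ openstack stack list
```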


Rolling back the upgrade

This section provides guidance for rolling back to a previous release of OpenStack.

Warning: Rolling back must be used only as a last resort, because you might lose any data that was added since the backup.

A common scenario that might require a rollback is when you take down production management services for an upgrade, complete part of the upgrade, and then discover problems that you did not encounter during testing. This requires rolling the environment back to the previous “known good” state.

Ensure that you did not make any state changes after starting the upgrade process, for example, new instances, networks, or storage volumes. The restored database has no record of such resources, so they will be orphaned after the rollback.

The sequence of steps to successfully roll back your environment is:

  1. Roll back configuration files.
  2. Restore databases from backup.
  3. Roll back packages.

First, verify that you have the required backups to restore. Also keep in mind that broken downgrades are more difficult to troubleshoot than broken upgrades. Weigh the risk carefully between trying to push a failed upgrade forward versus rolling it back.

To perform a rollback:

  1. Stop all OpenStack services.
  2. Copy the contents of the configuration backup directories that you created during the upgrade process back to the /etc/ directory.
  3. Restore the databases from the -db-backup.sql backup file that you created with the mysqldump command during the upgrade process:
    # mysql -u root -p < RELEASE_NAME-db-backup.sql

  4. Downgrade the OpenStack packages.
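The rollback steps can be sketched as a single script; the service list and the RELEASE_NAME placeholder are illustrative and must be adapted to the node being rolled back:

```shell
#!/bin/sh
# 1. Stop OpenStack services (example list; adjust per node).
for svc in openstack-nova-api openstack-glance-api neutron-server \
           openstack-heat-engine httpd; do
    systemctl stop "$svc.service"
done

# 2. Restore the saved configuration directories created before the upgrade.
for i in keystone glance nova neutron openstack-dashboard cinder heat ceilometer; do
    [ -d "$i-RELEASE_NAME" ] && cp -r "$i-RELEASE_NAME/." "/etc/$i/"
done

# 3. Restore all databases from the pre-upgrade dump.
mysql -u root -p < RELEASE_NAME-db-backup.sql

# 4. Downgrade the packages after switching the repositories back to Kilo.
yum -y downgrade 'openstack-*'
```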
