Verify Red Hat OpenShift Data Foundation deployment on IBM Power Systems

Introduction

Red Hat OpenShift Data Foundation—previously Red Hat OpenShift Container Storage—is software-defined storage for containers.

The first tutorial in this series provided an overview of Red Hat OpenShift Data Foundation and step-by-step instructions to deploy OpenShift Data Foundation on IBM Power Systems. This tutorial shows how to verify that you have successfully deployed OpenShift Data Foundation on IBM Power Systems.

Prerequisites

Before proceeding with the validation procedure, make sure that you have a fully functioning OpenShift Data Foundation deployment on OpenShift Container Platform.
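
As a quick, optional sanity check before you start, you can confirm that the OpenShift Data Foundation (OpenShift Container Storage) operators and the storage cluster report a healthy status. The following commands are a suggested check, not part of the deployment procedure itself, and assume the default openshift-storage namespace used throughout this series:

    # oc get csv -n openshift-storage
    # oc get storagecluster -n openshift-storage

The ClusterServiceVersions should show a Succeeded phase, and the storage cluster should report Ready before you continue with the steps below.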

Estimated time

Completing the validation steps for the OpenShift Data Foundation deployment takes about an hour.

Steps

Perform the following steps to verify that the OpenShift Data Foundation deployment completed successfully.

  1. Run the following command to get a list of the pods in the openshift-storage namespace:

    # oc get pods -n openshift-storage
    NAME                                                              READY   STATUS      RESTARTS   AGE
    csi-cephfsplugin-4x9hh                                            3/3     Running     0          31h
    csi-cephfsplugin-cvs8r                                            3/3     Running     0          31h
    csi-cephfsplugin-provisioner-6878df594-6d4xb                      6/6     Running     0          31h
    csi-cephfsplugin-provisioner-6878df594-cdb7x                      6/6     Running     0          31h
    csi-cephfsplugin-s24ft                                            3/3     Running     0          31h
    csi-rbdplugin-6swsc                                               3/3     Running     0          31h
    csi-rbdplugin-jkvc7                                               3/3     Running     0          31h
    csi-rbdplugin-lx4x5                                               3/3     Running     0          31h
    csi-rbdplugin-provisioner-85f54d8949-qf7vl                        6/6     Running     0          31h
    csi-rbdplugin-provisioner-85f54d8949-vfkb9                        6/6     Running     0          31h
    noobaa-core-0                                                     1/1     Running     0          31h
    noobaa-db-pg-0                                                    1/1     Running     0          31h
    noobaa-endpoint-85df6bdc65-4cp8k                                  1/1     Running     0          31h
    noobaa-operator-7c78fd8589-c9rt6                                  1/1     Running     0          31h
    ocs-metrics-exporter-844cd5988b-khg7b                             1/1     Running     0          31h
    ocs-operator-7dcc6dc48f-4kzjw                                     1/1     Running     0          31h
    rook-ceph-crashcollector-worker-0-86dbbb56b4-shvtf                1/1     Running     0          31h
    rook-ceph-crashcollector-worker-1-64f6dbbbb5-sjpb6                1/1     Running     0          31h
    rook-ceph-crashcollector-worker-2-5c56b98b8b-hrztk                1/1     Running     0          31h
    rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-59794b74kpxz8   2/2     Running     0          31h
    rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6dfc968dshnzc   2/2     Running     0          31h
    rook-ceph-mgr-a-8695fcd694-zz4l5                                  2/2     Running     0          31h
    rook-ceph-mon-a-57cb76b758-kk8d8                                  2/2     Running     0          31h
    rook-ceph-mon-b-7df48c5bfd-n4f2g                                  2/2     Running     0          31h
    rook-ceph-mon-c-6b99b7bb8c-ndnk8                                  2/2     Running     0          31h
    rook-ceph-operator-5c98f687bc-gdd4l                               1/1     Running     0          31h
    rook-ceph-osd-0-745bd95b78-ghhn5                                  2/2     Running     0          31h
    rook-ceph-osd-1-77d5949bf7-9tsj9                                  2/2     Running     0          31h
    rook-ceph-osd-2-55df7b46b9-sk7n5                                  2/2     Running     0          31h
    rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-06x6vjhwg   0/1     Completed   0          31h
    rook-ceph-osd-prepare-ocs-deviceset-localblock-1-data-0dgvhn98z   0/1     Completed   0          31h
    rook-ceph-osd-prepare-ocs-deviceset-localblock-2-data-0r94tzccj   0/1     Completed   0          31h
    rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-575476dgxfph   2/2     Running     0          31h
    
  2. Make sure that all the pods in the openshift-storage namespace are healthy, that is, in either the Completed or Running state (a quick way to filter for unhealthy pods is shown after the following list). The pods belong to one of the following categories:

    • csi – These pods provide the storage interface backend to the underlying Ceph storage.
    • noobaa – These are the core, database, endpoint, and operator pods for object storage implementation.
    • rook-ceph – Rook is the orchestrator of the Ceph implementation and maintenance in an OpenShift environment. These pods provide file and object storage functionality. Specifically, the object storage device (OSD) pods manage the underlying storage for the Ceph software across the OpenShift Data Foundation worker nodes.
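
    If the pod list is long, you can filter for pods that are not in the Running or Succeeded (Completed) phase. This is an optional shortcut rather than part of the original procedure; it relies only on standard oc field selectors:

    # oc get pods -n openshift-storage --field-selector=status.phase!=Running,status.phase!=Succeeded

    If no pods are listed, every pod in the namespace is in a healthy phase.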

    To determine the health of the Ceph layer itself, install an additional toolbox pod. The toolbox pod is not included with the OpenShift Data Foundation software, so you must deploy it manually in order to check Ceph health and run other useful commands against the Ceph layer when needed.

    # git clone https://github.com/rook/rook.git
    Cloning into 'rook'...
    remote: Enumerating objects: 78477, done.
    remote: Counting objects: 100% (96/96), done.
    remote: Compressing objects: 100% (86/86), done.
    remote: Total 78477 (delta 40), reused 24 (delta 8), pack-reused 78381
    Receiving objects: 100% (78477/78477), 41.37 MiB | 9.85 MiB/s, done.
    Resolving deltas: 100% (54687/54687), done.
    
    # cd rook/cluster/examples/kubernetes/ceph
    
  3. In the current directory, find the file named toolbox.yaml and modify it for your environment (an excerpt showing the two fields to change follows this list):

    • Change namespace to openshift-storage
    • Change image to registry.redhat.io/ocs4/rook-ceph-rhel8-operator
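
    For reference, the relevant portion of toolbox.yaml after these edits might look like the following excerpt. This is a trimmed illustration only; keep all the other fields in your copy of the file as they are shipped by the rook project:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rook-ceph-tools
      namespace: openshift-storage                                    # changed to the OpenShift Data Foundation namespace
    spec:
      template:
        spec:
          containers:
            - name: rook-ceph-tools
              image: registry.redhat.io/ocs4/rook-ceph-rhel8-operator # changed from the upstream rook/ceph image
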
  4. Install the toolbox using the oc command:

    # oc create -f rook/cluster/examples/kubernetes/ceph/toolbox.yaml
    deployment.apps/rook-ceph-tools created
    

    You should now see the toolbox pod running in the openshift-storage namespace.

    # oc get pods -n openshift-storage | grep tools
    rook-ceph-tools-74b7dcfcb6-v64gg 1/1 Running 0 72s
    
  5. Check the health status of Ceph in OpenShift Data Foundation using the following command:

    # oc rsh -n openshift-storage rook-ceph-tools-74b7dcfcb6-v64gg ceph -s
      cluster:
        id:     2b319a34-112c-4390-827b-3df526225e41
        health: HEALTH_OK
    
      services:
        mon: 3 daemons, quorum a,b,c (age 31h)
        mgr: a(active, since 31h)
        mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-b=up:active} 1 up:standby-replay
        osd: 3 osds: 3 up (since 31h), 3 in (since 31h)
        rgw: 1 daemon active (ocs.storagecluster.cephobjectstore.a)
    
      task status:
        scrub status:
            mds.ocs-storagecluster-cephfilesystem-a: idle
            mds.ocs-storagecluster-cephfilesystem-b: idle
    
      data:
        pools:   10 pools, 272 pgs
        objects: 429 objects, 397 MiB
        usage:   4.0 GiB used, 1.5 TiB / 1.5 TiB avail
        pgs:     272 active+clean
    
      io:
        client:   2.8 KiB/s rd, 38 KiB/s wr, 3 op/s rd, 3 op/s wr
    

    You can replace ceph -s with any ceph or rbd command of your choice (a convenient way to script this is shown after the following list). For example, try the following commands:

    • ceph health – To provide a simple one-line health status of Ceph
    • ceph osd tree – To dump the OSD map as a tree structure
    • ceph df – To display the Ceph cluster’s free space status
    • ceph osd df – To show how much space is allocated to, and available on, each OSD (OSD disk usage)
    • ceph versions – To check the running versions of the Ceph daemons (mgr, mon, osd, rgw, mds, and so on)
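
    Rather than copying the toolbox pod name by hand each time, you can capture it in a shell variable. This is an optional convenience, and it assumes your toolbox.yaml keeps the app=rook-ceph-tools label from the upstream example:

    # TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
    # oc rsh -n openshift-storage $TOOLS_POD ceph health
    # oc rsh -n openshift-storage $TOOLS_POD ceph osd tree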

    Note:
    If you see errors after performing this step (step 5), run the following command:

    oc patch OCSInitialization ocsinit -n openshift-storage --type json --patch  '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
    

    The oc patch command replaces the existing Ceph toolbox pod with a new one, so run the oc get pods command again to get the new toolbox pod name.

    oc get pods -n openshift-storage | grep tools
    
  6. Check the storage classes created by OpenShift Data Foundation to further verify that the deployment is successful (an optional provisioning check using one of these storage classes is shown after the command output).

    # oc get storageclasses
    NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  32h
    ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   31h
    ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  31h
    ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   31h
    openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  31h
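
    As an optional final check, you can create a small test PersistentVolumeClaim against one of these storage classes and confirm that it becomes Bound. The claim name and namespace used below are illustrative only; delete the claim once you have verified it:

    # cat <<EOF | oc create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: odf-verify-pvc
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    EOF
    # oc get pvc odf-verify-pvc -n default
    # oc delete pvc odf-verify-pvc -n default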
    

Summary

This tutorial showed how to verify that Red Hat OpenShift Data Foundation is deployed successfully. Refer to the subsequent tutorial in this series to learn how to use Ceph storage.

Refer to the following tutorials for more information: