Overview

Skill Level: Any Skill Level

This recipe shows how one can share data stored on tiered block storage across compute instances in different availability zones in IBM Cloud. It makes use of SSHFS, a FUSE-based SSH file system for mounting remote directories over an SSH connection.

Step-by-step

  1. Introduction

    There are situations where a customer scenario requires sharing tiered block storage volumes of varied IOPS across compute instances in different availability zones. This article leverages SSHFS to overcome this limitation using SSH, a very basic service that is available across operating system flavours. This recipe is specific to the IBM Cloud VPC network construct.

    Note: The NFS offering in VPC is expected to become generally available soon, but one can set up and manage one's own NFS servers and clients. In this article we use SSHFS to complement NFS for sharing the volume across VSIs located in different AZs.

  2. Architecture

    This recipe implements the architecture below, in which a Block Storage for VPC volume is attached to a virtual server in the Dallas 1 zone. This volume is then mounted on virtual servers in the Dallas 2 and Dallas 3 zones, so whatever data is written to the Dallas 1 volume is immediately visible in Dallas 2 and Dallas 3. I will refer to the virtual server hosted in the Dallas 1 data center as the primary server or master, and the servers hosted in Dallas 2 and Dallas 3 as secondary servers or slaves.

     

    [Architecture diagram: a Block Storage for VPC volume attached to the primary virtual server in Dallas 1 is shared over SSHFS with the virtual servers in Dallas 2 and Dallas 3.]

  3. Pre-requisites

    To implement this scenario one needs to establish passwordless authentication from the secondary servers to the primary server (master), as depicted in the architecture diagram. IBM Cloud VPC comes with private key authentication by default, and passwordless authentication cannot be set up with that alone, so one needs to change the SSH configuration from private key authentication to password-based authentication first. Then install the fuse-sshfs library on the secondary servers by running the command below:

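    A minimal sketch of that install, assuming a CentOS/RHEL 8 secondary server with the EPEL repository available (package manager and package name may differ on other distributions):

    # enable EPEL, which carries the fuse-sshfs package
    dnf install -y epel-release
    # install the FUSE-based SSHFS client
    dnf install -y fuse-sshfs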

    Reboot the machine once after installing fuse-sshfs, as otherwise mounting often fails with the error below:

    Transport endpoint is not connected

    or one can manually download the latest version:

    wget https://github.com/libfuse/sshfs/releases/download/sshfs-3.5.2/sshfs-3.5.2.tar.xz

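    If building from that tarball instead, a typical sequence is the following sketch; it assumes the Meson/Ninja toolchain and the fuse3 development headers are already installed:

    tar -xf sshfs-3.5.2.tar.xz
    cd sshfs-3.5.2
    meson setup build            # configure the build directory
    ninja -C build               # compile
    ninja -C build install       # install the sshfs binary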

    The virtual server instances map across availability zones as below:

     

    Instance            Availability Zone    Role
    vs1-az1-instance    Dallas 1             Primary (master)
    vs1-az2-instance    Dallas 2             Secondary (slave)
    vs1-az3-instance    Dallas 3             Secondary (slave)

  4. Attach Block Storage Volume to Primary Server

    I tested this solution with a 100 GB volume attached to vs1-az1-instance.

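    For reference, the same step can be scripted with the IBM Cloud CLI VPC plugin; this is a sketch in which the volume name my-100g-volume and attachment name vol-att-1 are illustrative, and flags may differ by CLI version:

    # create a 100 GB general-purpose volume in the Dallas 1 zone
    ibmcloud is volume-create my-100g-volume general-purpose us-south-1 --capacity 100
    # attach it to the primary virtual server
    ibmcloud is instance-volume-attachment-add vol-att-1 vs1-az1-instance my-100g-volume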

     

    Verify in the instance that the volume is attached.

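    One way to check from inside the instance; the device name under which the volume appears (for example vdd) depends on the attachment order:

    # list block devices; the new 100 GB volume should appear without partitions
    lsblk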

  5. Create Volume on Attached Storage

    Verify that the disk is visible in the OS.

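    For example, assuming the volume surfaced as /dev/vdd:

    # print the (still empty) partition table of the attached disk
    fdisk -l /dev/vdd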

     

    Create the partition that we will mount on the other two instances spread across zones.

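    A sketch of this step with parted; fdisk can be used interactively instead, and /dev/vdd remains an assumed device name:

    # create a GPT label and one partition spanning the whole disk
    parted -s /dev/vdd mklabel gpt
    parted -s /dev/vdd mkpart primary 0% 100%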

     

    Create an XFS file system on the partition.

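    For example, assuming the new partition is /dev/vdd1:

    mkfs.xfs /dev/vdd1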

    Mount the partition on a POSIX-compliant directory.

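    The directory /shareddata is the one exported over SSHFS later in this recipe; the device name is still an assumption:

    mkdir -p /shareddata
    mount /dev/vdd1 /shareddata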

    Make an entry in /etc/fstab to make the mount permanent.

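    A typical entry looks like the line below; using the UUID reported by blkid instead of the device name is more robust:

    /dev/vdd1 /shareddata xfs defaults 0 0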

    Reboot the virtual server.

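    After the reboot, confirm that the file system came back on /shareddata:

    df -h /shareddata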

     

  6. Passwordless Authentication Between Master Server and Slave Servers

    Add an entry for the az1 virtual server in /etc/hosts on the virtual servers in az2 and az3.

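    The entry maps the primary's hostname to its private IP address; the address below is a placeholder for the actual private IP of vs1-az1-instance:

    <primary-private-ip>   vs1-az1-instance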

    Enable passwordless authentication from the virtual servers in az2 and az3 (secondary) to the virtual server in az1 (primary).

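    The usual sequence, run on each secondary server, looks like this; it assumes password authentication has been temporarily enabled on the primary as described in the pre-requisites:

    # generate a key pair if one does not already exist
    ssh-keygen -t rsa
    # copy the public key to the primary server
    ssh-copy-id root@vs1-az1-instance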

    This establishes passwordless SSH authentication between the secondary servers and the primary server.

     

  7. Mount Remote File System of Primary Instance on Secondary Instances

    We have now reached the stage where we mount the volume that we provisioned on the primary instance in availability zone 1 (vs1-az1-instance) onto the secondary instances, vs1-az2-instance and vs1-az3-instance.

    The command that I will use for mounting is:

    sshfs root@vs1-az1-instance:/shareddata <<Mount_Point>>

    where mount points are :

    a) /root/vs1-az2-mount – Availability Zone 2

    b) /root/vs1-az3-mount – Availability Zone 3

     

    vs1-az2-instance

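    Concretely, on vs1-az2-instance:

    mkdir -p /root/vs1-az2-mount
    sshfs root@vs1-az1-instance:/shareddata /root/vs1-az2-mount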

    Make the mount permanent by adding an entry to /etc/fstab.


    Note: To auto-remount when the primary server reboots, make the entry below in your /etc/fstab.


    root@vs1-az1-instance:/shareddata /root/vs1-az2-mount fuse.sshfs allow_other,_netdev,noatime,reconnect,auto 0 0

    Note: If one wants to mount the file system using the private key for the primary host, one can leverage the entry below; the IdentityFile option is what points to the key file.

    sshfs#user@server:/remote/folder /local/mount/dir fuse IdentityFile=sshkeyfile,Port=XXX,uid=1000,gid=1000,allow_other,_netdev,ServerAliveInterval=45,ServerAliveCountMax=2,reconnect,noatime,auto 0 0

    Without these entries (in particular the reconnect option), the volume would not be remounted after a primary server reboot, and one would need to reboot the secondary servers to restore the mount.

    Reboot the machine and verify that the mount still persists.

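    A quick check that the SSHFS mount survived the reboot:

    mount | grep sshfs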

    vs1-az3-instance

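    The sequence mirrors the one on vs1-az2-instance, with the zone 3 mount point:

    mkdir -p /root/vs1-az3-mount
    sshfs root@vs1-az1-instance:/shareddata /root/vs1-az3-mount

    And the corresponding /etc/fstab entry, mirroring the zone 2 entry above:

    root@vs1-az1-instance:/shareddata /root/vs1-az3-mount fuse.sshfs allow_other,_netdev,noatime,reconnect,auto 0 0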

    We are now ready for testing the solution.

  8. Testing the Solution

    The virtual servers and storage that we configured above are now in place; we can verify that data written on the primary appears on both secondaries.

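    A simple functional test, with an illustrative file name:

    # on the primary (vs1-az1-instance): write a file into the shared volume
    echo "hello from az1" > /shareddata/testfile.txt

    # on the secondaries: the file should be immediately visible
    cat /root/vs1-az2-mount/testfile.txt    # on vs1-az2-instance
    cat /root/vs1-az3-mount/testfile.txt    # on vs1-az3-instance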

     

     

  9. Conclusion

    Using SSHFS is a no-cost solution for sharing data across virtual servers located in different availability zones.

  10. References

    a) https://www.redhat.com/sysadmin/sshfs

    b) https://stackoverflow.com/questions/19971811/sshfs-as-regular-user-through-fstab

    c) https://unix.stackexchange.com/questions/14143/what-is-a-better-way-to-deal-with-server-disconnects-of-sshfs-mounts

    d) https://linuxize.com/post/how-to-use-sshfs-to-mount-remote-directories-over-ssh/

    e) https://jamesnbr.wordpress.com/2019/10/24/install-sshfs-3-5-2-on-centos-8-rhel-8/

     
