Skill Level: Any Skill Level

The MDMS service allows you to bulk move large amounts of data (e.g. 120 TB) via a portable NAS device. Using this service, you can migrate existing VMs from your on-premises VMware environment to IBM Cloud hosted VMware hosts using an NFS datastore.


<This recipe is a work in progress>

VMware on-prem server with an NFS datastore (not vSAN-only)

VMware IBM Cloud environment, including the vSphere Web Client

Mass Data Migration Service (MDMS) device



  1. Order the MDMS service

    The Mass Data Migration Service (MDMS) can be ordered from the SoftLayer Control Portal. The ordering process requires selecting a target COS bucket and setting IP configuration information for your source data center. To simplify installation and use on site, it is highly recommended to have all the necessary network connections allocated by your network provider before ordering the device. Once the order is submitted and approved, the device will be shipped to your data center. Detailed instructions on the use of the device can be found in the User Guide.

    Before continuing to the next step, make sure your device is on the network and the storage pool is unlocked.

  2. Connect MDMS to VMWare server

    The MDMS device is loaded via an NFS share. Since ESXi allows mounting of NFS shares, the quickest and easiest way to connect the device is using this feature of ESXi. The following steps in the vSphere Client will allow you to connect the MDMS device to the ESXi host. Similar steps can be completed directly in the ESXi host.

    • From the datastore tab, select Storage > New Datastore from the Actions menu
    • Select the location and click Next
    • Select NFS for the type and click Next
    • Select NFS 3 for the NFS version and click Next
    • Enter a datastore name, the MDMS device share path and name, and the hostname / IP address of the MDMS device, and click Next
    • Select the ESXi hosts that can access the datastore and click Next
    • Review the information and click Finish
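    The same mount can also be performed from the ESXi shell with esxcli. This is a sketch only: the device IP, share path, and datastore name below are assumed placeholder values, not the recipe's actual configuration.

    ```shell
    # Mount the MDMS NFS export as a datastore from the ESXi shell.
    # The host IP, share path, and volume name are assumed values;
    # substitute the ones configured for your MDMS device.
    esxcli storage nfs add \
      --host=192.168.1.50 \
      --share=/mdms_share \
      --volume-name=MDMSDevice

    # Confirm the datastore is mounted and accessible
    esxcli storage nfs list
    ```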


    After a few moments, the new MDMS-backed datastore will be available for use by the ESXi host.




  3. Move your VMs to the MDMS device

    In this scenario, we are going to cold migrate a VM to the IBM Cloud. The VMs to be migrated should first be cleanly shut down in the manner that works best for each guest OS. In this case, we are migrating a single VM named WebServer via the vSphere Web Client.
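    If you prefer the ESXi shell to the Web Client for the shutdown, a rough sketch with vim-cmd follows. The VM ID (42) is an assumed example; use the ID that getallvms reports for your VM.

    ```shell
    # List registered VMs to find the numeric ID of the one to migrate
    vim-cmd vmsvc/getallvms

    # Request a clean guest OS shutdown (requires VMware Tools in the
    # guest); replace 42 with the ID reported for your VM
    vim-cmd vmsvc/power.shutdown 42

    # Verify the VM is powered off before migrating its storage
    vim-cmd vmsvc/power.getstate 42
    ```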

    • With the VM selected, select Migrate from the Actions menu
    • For migration type, select Change storage only and click Next
    • Select MDMSDevice as the storage to use and click Next
    • Review the selections and click Finish

    The ESXi system will now relocate your VM and all associated files (disks, snapshots, etc.) to the MDMS Device.

    Repeat the above steps for each VM that you want to migrate to the IBM Cloud.

    The MDMS Device UI can be used to monitor network traffic and capacity. This should allow you to see when all migrations have completed.

  4. Disconnect VMs and MDMS Device from ESXi

    When all migrations to the MDMS device have completed, you will need to remove the migrated VMs from inventory, disconnect the MDMS device from the VMware environment, shut the device down, and ship it back.

    • For each VM on the MDMS device, select it in the vSphere client and choose Remove from inventory
    • Verify the VM name is correct and click Yes
    • From the vSphere client, navigate to the Datastore tab, select the MDMSDevice datastore, and choose Unmount Datastore from the Actions menu
    • Verify the information and click OK


  5. Shutdown and ship the MDMS device

    Once disconnected from the ESXi host, the standard MDMS shutdown process can take place.

    • Log into the MDMS UI
    • Select the hostname under the Storage System tab and choose Shutdown Storage System
    • Select OK on the dialog box
    • The system will now be shut down.

    Additional details can be found in the MDMS User Guide. The device is now ready to be packed up and shipped to the IBM Cloud (Softlayer) Data Center. The shipping label is included. 

    Once the device arrives at the data center, the data will automatically be offloaded to the IBM Cloud Object Storage (COS) bucket that was specified when the device was ordered. Depending on the amount of data loaded, this can take up to two days after the device arrives at the data center. The next step can be completed while you wait for the data to be made available in COS.

  6. Configure rclone access to COS

    VMware ESXi will not natively load data from a COS bucket. Therefore, we need to use an intermediary to copy the data from the COS bucket to the target VMware ESXi host. In this case, we are going to use rclone running on a VM to copy data from COS to the ESXi server in the IBM Cloud. The rclone tool supports a number of different data sources, including IBM Cloud Object Storage and sftp. We will use the s3/COS support to access COS and sftp to access the ESXi host. The data flow looks like the image below. The data does not need to be stored on the system running rclone, though temporary files and caches can be created.

    WARNING: You cannot use this method to migrate directly to vSAN storage. This method only works with an NFS datastore. See below for more details.



    On your VM you are using to move the data, download and install rclone.

    Configure rclone for access to COS

    Rclone provides a configuration wizard which can be run using the command: rclone config. Follow these steps to configure rclone to access COS. The resulting configuration will be stored in ~/.config/rclone/rclone.conf. It will look something like this:

    [COS]
    type = s3
    provider = IBMCOS
    env_auth = false
    access_key_id = <your access key>
    secret_access_key = <your secret key>
    region =
    endpoint = s3.us-south.objectstorage.service.networklayer.com
    location_constraint = us-south-standard
    acl = private


    Alternatively, you can create this file and paste in the above values being sure to update the access_key_id, secret_access_key, endpoint, and location_constraint. You can verify the connection by listing the buckets in your account using the following command (including the colon):
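    As another alternative, the remote can be created non-interactively with rclone config create. This is a sketch: the access and secret keys below are placeholders you must replace, and the endpoint and location_constraint follow the us-south example above.

    ```shell
    # Create the COS remote without the interactive wizard.
    # YOUR_ACCESS_KEY / YOUR_SECRET_KEY are placeholders for your
    # own HMAC credentials.
    rclone config create COS s3 \
      provider IBMCOS \
      env_auth false \
      access_key_id YOUR_ACCESS_KEY \
      secret_access_key YOUR_SECRET_KEY \
      endpoint s3.us-south.objectstorage.service.networklayer.com \
      location_constraint us-south-standard \
      acl private

    # Show the resulting configuration
    rclone config show COS
    ```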

    # rclone lsd COS:
    -1 2018-06-11 15:29:45 -1 mdms-ap-cr-iaas
    -1 2018-06-11 15:32:36 -1 mdms-che01-iaas
    -1 2018-06-11 15:29:30 -1 mdms-eu-cr-iaas
    -1 2018-06-11 15:31:11 -1 mdms-eu-de-iaas
    -1 2018-06-11 15:30:44 -1 mdms-eu-gb-iaas
    -1 2018-06-11 15:31:28 -1 mdms-mel01-iaas
    -1 2018-06-11 15:32:20 -1 mdms-tor01-iaas
    -1 2018-06-11 15:28:42 -1 mdms-us-cr-iaas
    -1 2018-06-11 15:30:27 -1 mdms-us-east-iaas
    -1 2018-06-11 15:30:13 -1 mdms-us-south-iaas
    -1 2018-04-13 11:22:15 -1 mpc-mdms-test-east

    You can also list the contents of your bucket (mdms-us-south-iaas in our case) using this command:

    # rclone ls COS:mdms-us-south-iaas
    323 migrator-3537b76c.hlog
    53687091200 migrator-flat.vmdk
    8684 migrator.nvram
    574 migrator.vmdk
    0 migrator.vmsd
    2986 migrator.vmx


    Note: The sftp method only works when uploading to an NFS-based datastore. If you are using vSAN, the SFTP connection will fail. If you only have vSAN storage, you should use a File storage volume mounted via NFS as a workaround.

  7. Configure rclone access to ESXi

    Both ESXi and rclone support certificate-based authentication over ssh/sftp, which we will use in this example. Before rclone can connect to ESXi via sftp, you will need to configure your certificates. Please follow the VMware-provided instructions for doing this. If you prefer to use password authentication, make sure “PasswordAuthentication yes” is set in /etc/ssh/sshd_config on the ESXi host.
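    As a rough sketch of the key setup (the authorized_keys path matches VMware's documentation for ESXi 6.x; verify it against your ESXi version, and replace esxi-host with your host's name or IP):

    ```shell
    # On the rclone VM: generate a key pair if you don't already have one
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

    # Append the public key to the root user's authorized_keys on the
    # ESXi host; on ESXi 6.x this file is /etc/ssh/keys-root/authorized_keys
    cat ~/.ssh/id_rsa.pub | \
      ssh root@esxi-host 'cat >> /etc/ssh/keys-root/authorized_keys'

    # Test key-based login (should not prompt for a password)
    ssh -i ~/.ssh/id_rsa root@esxi-host 'hostname'
    ```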

    As with configuring rclone COS access, you can use the rclone config command to enable sftp access. Once saved, your rclone.conf file will contain a new section that will look something like this:

    [vmware_sftp]
    type = sftp
    host =
    user = root
    port =
    pass =
    key_file = /root/.ssh/id_rsa
    use_insecure_cipher = true
    disable_hashcheck = true

    Where host is the name or IP address of your ESXi host. Verify you can access your ESXi host by running this command:

    # rclone ls vmware_sftp:/vmfs 

    This will list the files in the /vmfs file system. If you have a large file store, this can take some time. You can stop the listing with Ctrl+C.



  8. Copy the data

    At this point, you are ready to copy the data from COS to your ESXi host. If your data has not fully loaded into COS you can wait until it is complete or you can start now. PUTs in COS are atomic, so once an object can be listed it can be downloaded.

    You will need to get the absolute path to your datastore from the ESXi server. This is easily accomplished by SSHing to the host. In this example, the command to copy the data looks like this:

    rclone copy COS:mdms-us-south-iaas vmware_sftp:/vmfs/volumes/5b0fefa9-d3cad756-b8e9-ac1f6b0e0da8/buckets/mdms-us-south-iaas

    or more generally:

    rclone copy <COS_alias>:<COS_bucket> <ESXi_alias>:<path_to_datastore>

    If you want more verbose output, add the -v (or -vv) flag to the rclone copy command.

    Depending on the speed of your network connection and the amount of data to transfer, this can take a few minutes to several hours.

    The rclone copy command will do a check before copy and not recopy files that already exist at the target location. This allows you to stop and restart a copy without having to recopy all of the files. Check out the rclone copy help for more potential optimizations.

  9. Import VMs

    Once the data copy has completed, you can import the VMs on the new ESXi host. Since we have copied the data to the ESXi host, you can use the Register VM option. In the vSphere client:

    • Select the datastore and choose Register VM from the Actions menu
    • Browse to the path where the VM images were copied, select the vmx file for the VM, and click OK
    • On the Register Virtual Machine screen, you can rename the VM and choose where to deploy it. Click Next
    • Select the host and click Next
    • Review the settings and click Finish


    Your virtual machine has now been registered on the new ESXi host. You can now adjust any VM settings, network connections, etc., and start your migrated VM in the IBM Cloud.
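    Registration can also be scripted from the ESXi shell. In this sketch the datastore path is an assumed example; the migrator.vmx file name comes from the bucket listing earlier in this recipe.

    ```shell
    # Register the copied VM from the ESXi shell instead of the
    # vSphere client; adjust the datastore path for your environment
    vim-cmd solo/registervm \
      /vmfs/volumes/MDMS_target/buckets/mdms-us-south-iaas/migrator.vmx

    # The command prints the new VM ID; confirm it appears in inventory
    vim-cmd vmsvc/getallvms
    ```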

  10. A word about vSAN

    If the ultimate destination for your VMware image is vSAN, you can follow the instructions above to move the data to an NFS datastore. Once the VM image is imported, you can use Storage vMotion to migrate the image from the NFS datastore to vSAN.

    If your VMWare environment does not have an NFS datastore, you can order File storage from the control portal as a temporary NFS datastore. 
