Skill Level: Any Skill Level

- Familiar with SoftLayer
- Familiar with npm/Node

In this recipe, we will show how to perform basic operations with Cloud Object Storage using the same S3 APIs as AWS.


To get started, users will need an account on SoftLayer or will be asked to log in with their IBMid.  For an overview of what IBM Cloud Object Storage is and its capabilities, please visit the IBM Bluemix Catalog service page.


  1. Creating an instance of S3

    Cloud Object Storage is a key component of SoftLayer and is available via the SoftLayer catalog.   To get started, go to the SoftLayer Customer Portal and log in with the appropriate userid and password.   Once authenticated, you will be presented with a dashboard.   Using the menu, select the Storage heading and then the Object Storage submenu to navigate to the Object Storage provisioning portal.   At this point, you will be presented with a dialog window as shown below.  Use the pull-down menu to select "Open Trial" and click the Continue button.



     On the next screen, you will be asked to accept the service agreement before going to the next step as shown here.


    Once the provisioning process completes, you will be returned to the main Object Storage portal page and should see the newly created S3-compatible instance of Object Storage as follows.


    At this point, you have a provisioned instance and can start working with the S3 APIs.

  2. Obtaining your S3 Credentials

    During the provisioning process, Cloud Object Storage creates a set of credentials specific to your tenant instance, which can be used by your application to store data using the S3 APIs.   As you can see in step 1, there are two accounts that I have created as part of my testing.   For now, we are going to focus on the first account and will click on the account name 'SLOSC1152423-1'.   When this link is clicked, we are brought to the account dashboard for this instance, where we can view credentials and usage for the current billing cycle, as well as go back to the main dashboard where all of our Object Storage accounts are managed.


    For the S3 client that we are using, we will need the Access Key ID, the Secret Access Key, and the public endpoint.   Note that the dashboard allows you to manage these credentials in case you want to expire old credentials, generate new ones, or manage multiple credentials for this account.
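    As a side note (this helper is our own sketch, not part of the recipe), many applications read these credentials from environment variables rather than hardcoding them in source.  Assuming variable names like COS_ACCESS_KEY_ID, which are purely illustrative, building the s3Options object might look like:

```javascript
// Sketch: assemble the s3Options object from environment variables instead
// of hardcoding credentials. The COS_* variable names are our own convention.
function loadS3Options(env) {
  return {
    accessKeyId: env.COS_ACCESS_KEY_ID,
    secretAccessKey: env.COS_SECRET_ACCESS_KEY,
    // Fall back to the public endpoint shown in the Object Storage dashboard.
    endpoint: env.COS_ENDPOINT || 's3-api.us-geo.objectstorage.softlayer.net',
    region: 'us-standard',
    sslEnabled: true
  };
}

console.log(loadS3Options(process.env).endpoint);
```

    This keeps secrets out of version control and lets the same code run against different instances by changing the environment.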

    Author note:   One of the great new features of Cloud Object Storage is the ability to replicate your Object Storage to multiple regions.   We will not be focusing on this feature in this recipe but if your scenarios require highly available data stores that can be used for HA/DR scenarios, please continue to monitor this space as new announcements occur.

  3. Creating your first S3 client

    For the purpose of this recipe, we decided to use Node.js and the s3 npm module for interacting with Object Storage.  There are many client SDKs available in a variety of programming languages.   For our use cases, we found this library to be the simplest to get started with; it also provides access to the AWS.S3 APIs.


    var s3 = require('s3');

    var client = s3.createClient({
      maxAsyncS3: 20, // this is the default
      s3RetryCount: 3, // this is the default
      s3RetryDelay: 1000, // this is the default
      multipartUploadThreshold: 20971520, // this is the default (20 MB)
      multipartUploadSize: 15728640, // this is the default (15 MB)
      s3Options: {
        accessKeyId: "******",
        secretAccessKey: "*****",
        region: "us-standard",
        endpoint: 's3-api.us-geo.objectstorage.softlayer.net',
        sslEnabled: true
        // any other options are passed to new AWS.S3()
        // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html#constructor-property
      }
    });

    var clientS3 = client.s3; // Provides direct access to AWS.S3 APIs

    The key parameters when we instantiate this object are the accessKeyId, secretAccessKey, endpoint, and region.   The first three items were obtained in step 2, and the region was picked up from the product documentation.  Once the object has been instantiated, we have access to two objects.  The first is the client returned by the main s3 constructor, which will be used later on for doing file uploads.  The second is an instance of AWS.S3, available on the main client instance, which allows us to do more native S3 operations such as creating and deleting buckets.

    Author note: The API docs on npmjs explicitly state that it is expected and commonplace for both of these libraries to be used in concert with each other: "This contains a reference to the aws-sdk module. It is a valid use case to use both this module and the lower level aws-sdk module in tandem."  More on the usage below.

  4. Creating a bucket

    S3 has a container model for storing artifacts called buckets.   S3 is relatively strict about buckets and provides a link describing the restrictions in its naming rules.   For the purpose of this recipe, we are going to create our own bucket, and following their convention, the name of the bucket will be entirely in lowercase letters.


    var bucketName = 'todkap';
    clientS3.createBucket({Bucket: bucketName}, function(err) {
      console.log("checking for error on createBucket " + bucketName, err);
    });


    In this code snippet, we are creating a bucket named todkap.  In the callback, we check whether an error was returned and log it.    Since this is only for testing purposes, we will manually check the console log to see whether the API call succeeded.

    At this point, we can verify the bucket has been created.   In the Object Storage dashboard, we can now see that the bucket todkap was indeed created.
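    To illustrate the naming rules mentioned earlier, here is a rough client-side check.  This helper is our own simplified sketch, not part of the s3 module, and the authoritative rules live in the S3 documentation: names must be 3-63 characters, made of lowercase letters, digits, and hyphens, starting and ending with a letter or digit.

```javascript
// Rough, simplified sketch of the S3 bucket naming rules: 3-63 characters,
// lowercase letters, digits and hyphens, starting and ending with a letter
// or digit. Not an official validator.
function isValidBucketName(name) {
  return /^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$/.test(name);
}

console.log(isValidBucketName('todkap'));   // true
console.log(isValidBucketName('MyBucket')); // false: uppercase not allowed
```

    Checking names before calling createBucket gives a clearer error message than waiting for the service to reject the request.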


  5. Storing Objects

    Now that we have a bucket named todkap, we can start to store objects in Object Storage.   Object Storage can hold both simple objects and uploaded files in a given bucket.     In the code snippet below, we will show how to store a string under the key mykey and how to store a file object as ttemp.txt using the module's file upload API.

    var data = {Bucket: bucketName, Key: 'mykey', Body: 'Hello!'};
    clientS3.putObject(data, function(err, data) {
      if (err) {
        console.log("Error uploading data: ", err);
      } else {
        console.log("Successfully uploaded mykey to " + bucketName);
        var params = {
          localFile: "./TryToEditMePlease.txt",
          s3Params: {
            Bucket: bucketName,
            Key: "ttemp.txt"
            // other options supported by putObject, except Body and ContentLength.
            // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
          }
        };
        var uploader = client.uploadFile(params);
        uploader.on('error', function(err) {
          console.error("unable to upload:", err.stack);
        });
        uploader.on('progress', function() {
          console.log("progress", uploader.progressMd5Amount,
            uploader.progressAmount, uploader.progressTotal);
        });
        uploader.on('end', function() {
          console.log("done uploading TryToEditMePlease.txt to " + bucketName);
        });
      }
    });

    Once this code has executed, we can go back to the Object Storage dashboard to verify things have been uploaded.


  6. Listing Objects

    Users will often want to programmatically list the objects in Object Storage.  This will be key for a variety of use cases, including the last step in this recipe, where we demonstrate how to delete objects.   S3 provides a simple API for listing the objects that exist in a bucket.   Below is the syntax for listing all of the items currently in the todkap bucket.

    clientS3.listObjects({Bucket: bucketName}, function(err, data) {
      console.log("listing " + bucketName, [err, JSON.stringify(data)]);
    });

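    The callback's data argument follows the AWS.S3 listObjects response shape, where data.Contents is an array of entries each carrying a Key.  As a small sketch (this helper and the sample object are our own, not real API output), pulling just the keys out of a listing looks like:

```javascript
// Sketch: collect only the object keys from a listObjects response.
// The sample object below mimics the response shape for illustration.
function keysOf(listing) {
  return (listing.Contents || []).map(function(item) {
    return item.Key;
  });
}

var sample = { Contents: [{ Key: 'mykey' }, { Key: 'ttemp.txt' }] };
console.log(keysOf(sample)); // [ 'mykey', 'ttemp.txt' ]
```

    Step 7 uses exactly this shape of data.Contents when building its list of items to delete.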
  7. Deletion of Objects and Containers

    To demonstrate the full life cycle of Object Storage scenarios, we have walked through the various CRUD operations with the exception of DELETE.   In this section we will show how to build the list of items to delete using the listObjects API from the previous section.   Once we have the complete list of items to delete, we will then remove each of the items from the bucket.   At this point in time, we will be able to delete the bucket (S3 requires the bucket to be empty before removing the bucket).

    // inside the listObjects callback from the previous step, where data is available
    var itemsToDelete = [];
    data.Contents.forEach(function(currentItem) {
      console.log('item', currentItem.Key);
      itemsToDelete.push({ Key: currentItem.Key });
    });

    console.log('items to delete', itemsToDelete);

    clientS3.deleteObjects({
      Bucket: bucketName,
      Delete: { /* required */
        Objects: itemsToDelete, /* required */
        Quiet: false
      }
    }, function(err, data) {
      console.log("deleting items from " + bucketName, [err, JSON.stringify(data)]);
      clientS3.deleteBucket({Bucket: bucketName}, function(err, data) {
        console.log("deleting " + bucketName, [err, JSON.stringify(data)]);
      });
    });

    At this point, we can go back to the Object Storage dashboard and verify that the bucket no longer exists and, of course, has no content in it.



  8. Conclusion

    We have now walked you through the basic operations that S3 provides for storing data in the cloud.   Having been a long-time Object Storage user, I find the ability to layer S3-compatible APIs on top of Object Storage a welcome addition.    We hope this simple recipe demonstrates how easy it is to consume S3 in Bluemix.

  9. Connect with Todd on:

    LinkedIn: https://www.linkedin.com/in/toddkaplinger/

    Twitter: @todkap
