Skill Level: Any Skill Level

Understanding of Node.js and some of the concepts around Object Storage

S3 provides a simple but powerful feature, pre-signed URLs, that lets you share objects in your Object Storage with end users without sharing your access credentials. In this recipe, we will demonstrate a simple scenario that you can use in your apps.


For users who are new to Object Storage concepts, we suggest first reading the earlier recipe Cloud Object Storage - S3 API Intro. More advanced users will need access to Node.js, a text editor, and the ability to run a few npm commands to implement this recipe. We plan to extend the Node application used in the original article and will repeat some of its sections to streamline this recipe.


  1. Simple scenario for Pre-Signed URLs

    For this recipe, we will describe a scenario demonstrating the value of temporary URLs for advertising resources stored in your Object Storage.    

    Our team has a typical microservices architecture comprising a set of lightweight Node applications that interact with each other over messaging.   In one of our flows, there is a requirement to transfer large binary objects from a client device and persist them into Object Storage.   As our Object Storage is used for persisting a variety of objects, we wanted to limit which applications have access to this persistence tier and, more granularly, limit the scope of access to specific buckets down to the key level.

    In addition, transferring these large objects across our network and then through our cloud-native architecture over messaging was a significant performance concern, as each network hop would require transferring the data across each of the various microservices.

    After doing some investigation, we found a feature called pre-signed URLs that allowed us to generate limited-life access tokens scoped to specific keys in our Object Storage bucket.   Because these URLs are limited in scope and carry an expiration, we could provide them to client-side applications, leveraging a simplified scoping mechanism and a well-defined expiration model.   This feature also allowed our team to eliminate some of the network bottlenecks that can occur in a microservices architecture and enabled a simpler request flow to Object Storage.

  2. Creating an instance of S3

    As we learned in the previous recipe, the first step in interacting with S3 is to instantiate an S3 client and get access to its core client-side APIs.   In this recipe, we have added a new dependency called request that will be used later in the recipe to demonstrate how to interact with the pre-signed URL.

    var s3 = require('s3');

    var request = require('request');

    var client = s3.createClient({
      maxAsyncS3: 20, // this is the default
      s3RetryCount: 3, // this is the default
      s3RetryDelay: 1000, // this is the default
      multipartUploadThreshold: 20971520, // this is the default (20 MB)
      multipartUploadSize: 15728640, // this is the default (15 MB)
      s3Options: {
        accessKeyId: "*************",
        secretAccessKey: "*************",
        region: "us-standard",
        endpoint: 's3-api.us-geo.objectstorage.softlayer.net',
        sslEnabled: true
        // any other options are passed to new AWS.S3()
        // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html#constructor-property
      }
    });

    var bucketName = 'todkap';


    Upon completion of this step, we have created a connection to Object Storage and defined the bucketName that we will use for the remainder of the recipe.

    Author note: The creation of the bucket is outside the scope of this article.  If you want to learn more about the API for creating a bucket and the various CRUD operations around Object Storage, please refer to the article referenced in the prerequisites section of this recipe.

  3. Creating a Temporary URL

    In S3, the concept of temporary URLs is exposed via the getSignedUrl API call.   This API can be invoked in either synchronous or asynchronous form.   In an effort to embrace the async nature of Node.js, we will focus on the asynchronous form and dive a little into the parameters that we used for this recipe.

    In the function call below, the API provides a params object that can be leveraged to customize what interactions can be done with Object Storage.

    // clientS3 is the underlying AWS.S3 instance exposed by the s3 package
    var clientS3 = client.s3;

    var params = {Bucket: bucketName, Key: 'tempURLItem', Expires: 600, ContentType: 'application/json'};
    clientS3.getSignedUrl('putObject', params, function (err, url) {
      console.log('The URL is', url);
      // .... will use this URL in the next step of the recipe.
    });


    In this API, S3 requires the user to define both the Bucket and the Key they want to interact with.  Since we know we are going to PUT a JSON object into Object Storage, we also define the ContentType of the object and indicate, via the 'putObject' operation, that we plan to use this URL only for putting the object (not retrieving it).   These attributes are part of the set that is signed by Object Storage when the temporary URL is generated.   To restrict the lifetime of the temporary URL, we set the expiration to 600 seconds (10 minutes).   For our scenario, we should probably limit this to about 1 minute, but that is outside the scope of this example.

  4. Using a Temporary URL

    Now that we have generated a temporary URL, we will demonstrate how to use it to PUT data into Object Storage.   As you may recall from previous steps, the S3 SDK handles the security token propagation and handshaking required to interact with Object Storage.   In this flow, the initial security exchange was handled when the temporary URL was generated.   Once the URL has been generated with its own authorization signature embedded in the query string, the client application is no longer required to provide credentials.     Using the request module, the upload looks similar to

    request({
        url: url,
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: "{ email: 'todkap@us.ibm.com'}"
    }, function (error, response, body) {
        console.log('Logging error from tmp url put', error);
        console.log('Logging response from tmp url put', body);
    });


    At this point, we can validate that the item has successfully been uploaded to Object Storage and is accessible using the standard S3 APIs.

  5. Conclusion

    In this recipe, we showed a simple but extremely useful API for generating temporary URLs that client applications can leverage to upload (and, if desired, download) protected Object Storage resources while maintaining a limited set of access rights to those resources.   While the focus of this article is on one simple scenario, we are sure developers can leverage this concept in a variety of their application flows.   We hope this article provides enough background to encourage you to explore this API in more detail!

  6. Connect with Todd on:

    LinkedIn: https://www.linkedin.com/in/toddkaplinger/

    Twitter: @todkap
