In Workshop 1 of this series, you created the application, bound a Data Cache service to it, and verified that it is running. In this workshop we will add scaling capability to the application. In real-life situations, web applications are often associated with business services that generate revenue for organizations. It is critical that such applications scale up to meet demand when needed. It is equally important that an application does not hold on to more resources than it needs in anticipation of peak demand, and when demand is low it is useful to be able to free up resources that are not being used. The Auto-Scaling service allows you to establish this behavior through policies that you can customize for your application.
Important: Before proceeding with this workshop, you will need to have completed Workshop 1 in this series.
This workshop has the following sections; you need to complete the steps in Task 1 through Task 8 to complete this workshop.
Architecture reference diagram for Liberty Workshop 2

Task 1. Adding an Auto-Scaling service to your application

To get started with this task, follow these steps:
  1. Go back to the Bluemix Dashboard tab and click on ADD A SERVICE OR API.
  2. Search for Auto-Scaling by typing the name or parts of it in the search field.
    Search for Auto-Scaling
  3. Click on the Auto-Scaling service to open the details.
    Auto-Scaling service details
  4. Make sure that under App, you have selected the same Java application you created in Workshop 1: Bluemix-Java-CacheLab. Leave the other options as they are and click CREATE.
  5. If you are prompted to restage the application, click RESTAGE.
Your new Auto-Scaling service will now be created and bound to your application.

Task 2. Creating and modifying auto-scaling policies

An auto-scaling policy allows you to create rules that will determine when the Auto-Scaling service will increase or decrease the number of instances of your application. In this task we will create and modify rules for our Java Liberty application.
  1. Within your application click on the Auto-Scaling service.
  2. Click CREATE AUTO-SCALING POLICY to get started. Enter the parameters for the policy as follows:
    • Enter a name for the policy.
    • Enter the minimum and maximum number of instances.
    • Scroll down to the section for Rule 1. Leave the Metric Type set to Memory, but change the Scale Out percentage to 30 and the Scale In percentage to 15.
    • Expand the Advanced Configurations section and set the values as shown: a Statistic Window of 30 seconds, and 60 seconds for both the Breach Duration and the Scaling Cooldown Periods. We'll be using these smaller time frames for testing later.
    Note: These settings are much lower than what you would typically use for a production application. They are low enough to ensure that the Auto-Scaling service scales your application within the shorter test window used later in this workshop. A conceptual sketch of how these settings interact appears at the end of this task.
    Scaling Rules
  3. Click ADD A RULE to specify another rule. This time, set the Metric Type to JVM Heap (CPU is not offered as a metric type for Liberty applications; the available types are Memory, JVM Heap, Throughput, and Response Time). Leave the other values as they were for the previous rule.
  4. Click SAVE to save your policy with the two rules.
    Save the auto-scaling policy definition
Your application will now scale automatically based on the policy you put in place. We will see the effects in the coming tasks, when we start load testing the application.
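To make these policy settings more concrete, here is a minimal, purely illustrative sketch in plain Lua (the same language used for the Load Impact scenario later in this workshop). It is not the Auto-Scaling service's actual implementation, and all names in it are hypothetical; it only shows how the Scale Out and Scale In thresholds combine with the Breach Duration and Scaling Cooldown settings from Rule 1 to decide whether to add or remove an instance.
    -- Illustrative sketch only: NOT the Auto-Scaling service's code.
    -- Shows how a threshold rule's settings interact when deciding to scale.
    local rule = {
      metric = "Memory",         -- Metric Type chosen in the policy
      scale_out_threshold = 30,  -- percent: add an instance above this
      scale_in_threshold = 15,   -- percent: remove an instance below this
      breach_duration = 60,      -- seconds the threshold must stay breached
      cooldown = 60              -- seconds to wait after any scaling action
    }

    -- avg_metric: metric value averaged over the Statistic Window (30 s above)
    -- breached_for: how long (seconds) the threshold has been continuously breached
    -- since_last_action: seconds since the last scale out / scale in action
    local function scaling_action(rule, avg_metric, breached_for, since_last_action)
      if since_last_action < rule.cooldown then
        return "wait"        -- still inside the Scaling Cooldown Period
      elseif avg_metric > rule.scale_out_threshold and breached_for >= rule.breach_duration then
        return "scale out"   -- add an instance, up to the policy maximum
      elseif avg_metric < rule.scale_in_threshold and breached_for >= rule.breach_duration then
        return "scale in"    -- remove an instance, down to the policy minimum
      end
      return "no change"
    end

    print(scaling_action(rule, 42, 75, 120))  --> scale out
With the smaller Statistic Window, Breach Duration, and Cooldown values chosen above, the service reacts quickly enough to be observed during the short load tests later in this workshop.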

Task 3. Adding the Load Impact service and creating an account

In the previous task, we incorporated auto-scaling capabilities into our application. In this task, we will put those capabilities to the test. To do that, we need load tests that stress the application enough for the auto-scaling policy created in the previous task to take effect. Load Impact is a third-party service that lets you define such load and stress tests and simulate significant load on your application. In this task, you will add a Load Impact service instance to your space. To get started with this task, follow these steps:
  1. Go back to the Bluemix Dashboard tab and click on ADD A SERVICE OR API.
  2. Search for Load Impact by typing the name or parts of it in the search field.
  3. Click on the Load Impact service to open the details.
    Load Impact service details
  4. Click CREATE to create and bind the new service to your space. This service instance can be used across multiple applications, so unlike some other services, it is not bound to a specific application.
  5. Click OPEN LOAD IMPACT DASHBOARD in the next screen.
  6. When creating your Load Impact account, be sure to set the timezone and country for proper test results, then click Save and continue. Important: Do not alter your First Name or Surname.
  7. Complete the registration.
    Complete registration
Your Bluemix space, containing your current application, is now tied to your new Load Impact account.

Task 4. Creating a user scenario

A Load Impact user scenario defines what actions a simulated user will take during the load tests of your application. Load Impact can auto-generate tests, but to achieve more realistic tests for your application it is important to create your own. In this task we will add our own user scenario to use in a load test later on. To get started with this task, follow these steps:
  1. Once logged in to your Load Impact account, navigate to User scenarios and click Create user scenario. Here is where you will define what actions our simulated users will take during the Load Impact test.
    User scenarios
  2. Name your user scenario, then copy and paste the test code we have predefined for you below into the script box.
    Load Impact Sample Test
    -- Variable determines appropriate website URL to test.
    -- EDIT this VARIABLE to test your application.
    -- Ensure there is NOT a trailing slash at the end
    local WEBSITE_URL = "http://replace-me.mybluemix.net"
    -- User input we want to randomly generate
    -- (keys restricted to lowercase letters so they remain URL-safe)
    local KEY1 = string.char(math.random(97, 122))
    local KEY2 = string.char(math.random(97, 122))
    local KEY3 = string.char(math.random(97, 122))
    local VAL1 = math.random(1, 31415926)
    local VAL2 = math.random(1, 31415926)
    local VAL3 = math.random(1, 31415926)
    -- Send PUT request to the app with random Key-Value pair
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=put&key=" .. KEY1 .. "&value=" .. VAL1 .. "&encrypt=false", auto_decompress=true}
    })
    -- Simulate cache refresh by fetching all values
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY1 .. "&value=" .. VAL1 .. "&encrypt=false", auto_decompress=true}
    })
    -- Send second and third put requests with random Key-Value pairs, refreshing the cache after each
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=put&key=" .. KEY2 .. "&value=" .. VAL2 .. "&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY2 .. "&value=" .. VAL2 .. "&encrypt=false", auto_decompress=true}
    })
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=put&key=" .. KEY3 .. "&value=" .. VAL3 .. "&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY3 .. "&value=" .. VAL3 .. "&encrypt=false", auto_decompress=true}
    })
    -- Send get requests to fetch each value by key, refreshing the data cache after each
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=get&key=" .. KEY1 .. "&value=&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY1 .. "&value=" .. VAL1 .. "&encrypt=false", auto_decompress=true}
    })
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=get&key=" .. KEY2 .. "&value=&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY2 .. "&value=" .. VAL2 .. "&encrypt=false", auto_decompress=true}
    })
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=get&key=" .. KEY3 .. "&value=&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY3 .. "&value=" .. VAL3 .. "&encrypt=false", auto_decompress=true}
    })
    -- Delete keys one through three, refreshing the cache after each request
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=delete&key=" .. KEY1 .. "&value=&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=&value=&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=delete&key=" .. KEY2 .. "&value=&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=&value=&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=delete&key=" .. KEY3 .. "&value=&encrypt=false", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=&value=&encrypt=false", auto_decompress=true}
    })
    -- Send ENCRYPTED put requests to the app with the random Key-Value pairs, refreshing the cache after each
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=put&key=" .. KEY1 .. "&value=" .. VAL1 .. "&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY1 .. "&value=" .. VAL1 .. "&encrypt=true", auto_decompress=true}
    })
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=put&key=" .. KEY2 .. "&value=" .. VAL2 .. "&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY2 .. "&value=" .. VAL2 .. "&encrypt=true", auto_decompress=true}
    })
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=put&key=" .. KEY3 .. "&value=" .. VAL3 .. "&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY3 .. "&value=" .. VAL3 .. "&encrypt=true", auto_decompress=true}
    })
    -- Send get requests for the ENCRYPTED values by key, refreshing the data cache after each
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=get&key=" .. KEY1 .. "&value=&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY1 .. "&value=aJY1vAbJJA2tCx2HN8QoQA%3D%3D&encrypt=true", auto_decompress=true}
    })
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=get&key=" .. KEY2 .. "&value=&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY2 .. "&value=3o29aRTtQAPQBq1uW7zSkg%3D%3D&encrypt=true", auto_decompress=true}
    })
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=get&key=" .. KEY3 .. "&value=&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=" .. KEY3 .. "&value=zQ7y%2Frb9pnRKfvxoPk%2B3cA%3D%3D&encrypt=true", auto_decompress=true}
    })
    -- Delete Keys one and two, leaving three. Refresh cache after each request
    http.request_batch({
    {"GET", WEBSITE_URL .. "/ecaas?operation=delete&key=" .. KEY1 .. "&value=&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=&value=&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=delete&key=" .. KEY2 .. "&value=&encrypt=true", auto_decompress=true},
    {"GET", WEBSITE_URL .. "/ecaas?operation=all&key=&value=&encrypt=true", auto_decompress=true}
    })
    client.sleep(math.random(20,40))
    
    Tip: Since your web application has a different URL, you will need to modify the code, setting the WEBSITE_URL variable to your own application's Bluemix URL. Make sure your URL does not end with a /.
    Copy and paste the test into the script box
  3. Click the Save button to create your scenario.
The user scenario we provided was created using a combination of Load Impact's Chrome recorder extension and some custom Lua scripting. If you want more information on generating your own tests for other applications, feel free to browse Load Impact's documentation.
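If you adapt this scenario to your own application, the repetitive request batches can be factored into a small helper function. The sketch below is an optional refactor, not part of the predefined script; it assumes the same WEBSITE_URL variable and /ecaas query interface used above, and it behaves like one put/refresh round trip from the script. Like the script itself, it runs inside a Load Impact user scenario, where http.request_batch is available.
    -- Optional refactor of the scenario above (assumes the same /ecaas interface).
    local WEBSITE_URL = "http://replace-me.mybluemix.net"

    -- Build an /ecaas request URL for a given cache operation.
    local function ecaas_url(operation, key, value, encrypt)
      return WEBSITE_URL .. "/ecaas?operation=" .. operation ..
             "&key=" .. (key or "") .. "&value=" .. (value or "") ..
             "&encrypt=" .. tostring(encrypt)
    end

    -- One put followed by a cache refresh, like the batch pairs in the script above.
    local KEY = string.char(math.random(97, 122))
    local VAL = math.random(1, 31415926)
    http.request_batch({
      {"GET", ecaas_url("put", KEY, VAL, false), auto_decompress=true},
      {"GET", ecaas_url("all", KEY, VAL, false), auto_decompress=true}
    })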

Task 5. Creating and running a test configuration

A test configuration in Load Impact orchestrates how simulated users will act and behave during a Load Impact test. Here you determine testing factors such as how many users will be simulated, how long the test will run, the number of servers used, and which user scenarios will be executed throughout the test. To get started with this task, follow these steps:
  1. Navigate to Test configurations and click Create test configuration to start creating our Load Impact test.
  2. Give your test configuration a name of your choice, and specify the Target URL for your application.
    Create test configuration parameters
  3. Expand the Load test execution plan section.
  4. Increase the number of Virtual Users (VUs) so the test ramps up to 80 VUs over five minutes; this will stress our application enough to trigger additional instances. (A rough estimate of the resulting request rate appears at the end of this task.)
    Increase the number of Virtual Users
  5. Expand the User scenarios section. Set the user scenario to the one you just created. You can leave the Load zone at its default.
    User scenarios configuration
  6. Scroll down and expand the Extra settings section. Set the Increase source IP(s) field to 3x.
  7. Click Create test configuration and start test. When prompted to confirm, click Run test.
    Click Create test configuration and start test
  8. Your test will begin to run, providing you with statistics as it progresses throughout its duration (see the running test below). You will delve further into the test results in the next task.
    Test running
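As a rough sanity check (a back-of-envelope estimate, not a Load Impact feature), you can reason about how much traffic this configuration generates: each iteration of the user scenario issues 34 GET requests and then sleeps for 20 to 40 seconds, so at full ramp-up the 80 VUs produce on the order of tens of requests per second. A small plain-Lua calculation, with an assumed average iteration time:
    -- Back-of-envelope load estimate for this test configuration.
    -- The average iteration time below is an assumption (about 30 s of sleep
    -- plus a few seconds spent on requests); actual numbers will vary.
    local vus = 80                     -- Virtual Users at full ramp-up
    local requests_per_iteration = 34  -- GET requests issued by the scenario above
    local avg_iteration_seconds = 35
    local rps = vus * requests_per_iteration / avg_iteration_seconds
    print(string.format("~%.0f requests per second at peak load", rps))
    --> ~78 requests per second at peak load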

Task 6. Load Impact test results and adding graphs

As the Load Impact test runs its course, data collected from load testing your application is generated and saved. In this task you will access and view some of this information. To get started with this task, follow these steps:
  1. Scroll down on the Load Test page for the test you just started and find the Charts section. Load Impact generates graphs for your test as it runs. The first graph shows the number of active clients over time along with the user load time, so you can see whether load times increase as the user load grows.
    Graph for the test results
  2. From the popup menu, select the Result pull-down to see what types of charts you can add. For example, you can add test failure rates over time.
    Test failure rates over time
Load Impact, however, provides only some of the monitoring tools at our disposal, given the services we have bound to our application. You will get more involved with these analysis tools in the next tasks.

Task 7. Auto-Scaling metric statistics and history

Now that you have finished the Load Impact test from the previous task, you can look further into the effects of the test on the Auto-Scaling policy rules you put in place in Task 2. The metrics we will look at show how effectively your Auto-Scaling policy handled the heavy load of the Load Impact test. To get started with this task, follow these steps:
  1. First you need to go back to the Bluemix Dashboard, and click to select the application.
  2. On the left hand navigation sidebar you will see a list of services. Click on Auto-Scaling.
    Click on Auto-Scaling
  3. Click the Metric Statistics tab to see graphs of usage statistics over time for your Bluemix application. You can scroll down to see all the graphs. If you have set up scaling rules for a specific metric, you will see upper and lower threshold indicators that determine when your application will add or remove instances.
    Graphs on CPU usage statistics
    Graphs on memory usage statistics
  4. If your application is stressed enough, you will see the scaling policies you defined take effect and, depending on the conditions, the application will scale out and scale in. To check the scaling history, click the Scaling History tab.
    Scaling service and Scaling History
  5. As auto-scaling adds or removes instances of your application, you can see these changes reflected on your app's page in the Bluemix dashboard.
    Auto-Scaling creating or removing instances

Task 8. Monitoring performance with the Monitoring and Analytics service

The Monitoring and Analytics service provides Performance Monitoring resource metrics for your Bluemix Liberty application. This part of the service allows you to identify potential issues with your application at runtime. To get started with this task, follow these steps:
  1. On the left hand navigation sidebar you will see a list of services. Click on Monitoring and Analytics.
  2. Under the Performance Monitoring tab you will see multiple graphs depicting different performance related metrics:
    • The JVM CPU Usage is useful for identifying any unusual spikes in CPU usage.
    • The Java Heap Usage graph lets you see how much heap memory your application has been using and whether it falls within an acceptable range.
    • You can also check your Bluemix application’s thread pool usage to see if it is meeting your expectations.
    • You can monitor how often and for how long your application runs garbage collection, and deduce what implications that may have on performance.
    Performance Monitoring tab

Still have questions?

Get expert help in the Bluemix forum.

