What this article will cover
This article does not describe the overall performance tuning approach from test design, through implementation, to report analysis. There is plenty of information on that already out there on the web. (Have a look at the end of the entry – I have attached some links which I personally found interesting for this topic.) Instead, this article tries to give you some general guidance when you are faced with the topic in a portal environment. If you are more interested in hands-on guidance, or if you are implementing actual performance test suites, we have got you covered in this article.
If you are in a hurry you can scroll down to the takeaway section. There you will find all the information covered in condensed form.
Hint: it's a good idea to check the blog regularly – there might be a follow-up to this post.
If you are still wondering why you should integrate performance testing into your development process, have a look at my blog post. But now let's start with some general best practices for when you begin your performance testing journey.
Try to setup an environment that comes close to your production environment
If you can't set up a comparable environment for various reasons (budget etc.), use a scaled-down environment – knowing that the uncertainty of the results increases. And even if this approach is not feasible, you should still set up performance tests for benchmarking in your regular development workflow. Also note that DX can be used as an integration platform. Be aware that load tests will also stress the integrated systems, and that the performance of those systems might have an impact on your test results.
Your production environment consists of a load balancer, two web servers for static content, a portal cluster of four nodes, a database server and an LDAP server. Furthermore you are integrating various (10+) back-ends into your application.
Even with enough budget, time and resources, setting up such an environment purely for performance test purposes would be cumbersome and hard to achieve. A more realistic setup could be that you only use a single load balancer, a single web server and a standalone portal instance. All other systems are reused from an existing staging environment (preferably the environment closest to production). In such a scenario you will have to ensure that external access to the environment is limited while performance tests are running. Furthermore, if you share resources with other environments you will also share the network. Hence, always be careful when comparing two benchmarks if you can't be sure that nobody else created high loads of traffic at the time (this might have an impact on latency, for example). Also consider possible caveats that might arise while setting up such an environment. For example, if you want to test with 100,000 users but the LDAP server in your environment only holds 100 users for verification purposes, you will have to spend additional effort automating user generation – and deletion after your tests.
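That kind of user generation is easy to script. Below is a minimal Python sketch that emits an LDIF file of sequentially numbered throwaway test users; the DN suffix, object class, and password are assumptions – adapt them to your directory schema.

```python
# Sketch: bulk-generate throwaway LDAP test users as LDIF text.
# DN suffix, objectclass and password are assumptions for illustration.

def generate_user_ldif(count, suffix="o=example"):
    """Return LDIF text for `count` sequentially numbered test users."""
    entries = []
    for i in range(1, count + 1):
        uid = f"perfuser{i:06d}"
        entries.append("\n".join([
            f"dn: uid={uid},cn=users,{suffix}",
            "objectclass: inetOrgPerson",
            f"uid: {uid}",
            f"cn: Perf User {i}",
            f"sn: User{i}",
            "userpassword: changeit",  # test-only credential
        ]))
    return "\n\n".join(entries) + "\n"

if __name__ == "__main__":
    ldif = generate_user_ldif(3)
    print(ldif.splitlines()[0])  # -> dn: uid=perfuser000001,cn=users,o=example
```

Pipe the generated file into ldapadd, and keep a matching delete script next to it so the directory can be reset to its original state after each run.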
Tune your portal for performance
Also think about tuning your portal based on the input you get from the Portal Performance Tuning Guide. But remember that tuning the environment isn't a one-time task after which you're done! Instead you should continuously improve the portal settings based on the results of your performance tests. Also note that Portal already provides a ConfigEngine task tune-initial-performance which you should use as a starting point.
Measure first then optimize
Don't try to optimize your code or infrastructure before you have started with your performance tests. There is always the caveat that developers will spend many hours 'improving' their code for better performance. But what if the real bottleneck of your application is somewhere else? Then this 'wasted' time could have been spent fixing the real bottleneck. Therefore you should see performance testing as an iterative approach: first measure your application, then improve your code (and reconfigure your settings) for performance. When you're done: rinse and repeat.
As soon as the first deployable version of your application exists you should start performance testing. (There is no value gained if you start your performance tests just before go-live – because then you might identify a problem but have no time left to fix it!) Measure, create reports and identify performance bottlenecks, for example through thread and heap dump analysis or CPU profiling. (It's a good habit to involve the development team in this step.) Always give feedback to the development team (and possibly the operations team as well) based on your results. Based on this they can start optimizing their code. After they are done optimizing, adapt your performance tests and start measuring again.
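A first triage of a thread dump can also be automated. A minimal sketch, assuming jstack-style output (adapt the pattern for javacore files): counting threads per state gives a quick hint whether worker pools are blocked on locks or waiting on back-ends.

```python
# Sketch: count thread states in a jstack-style Java thread dump.
# A spike in BLOCKED threads usually points at lock contention.
import re
from collections import Counter

STATE_RE = re.compile(r"java\.lang\.Thread\.State: (\w+)")

def thread_states(dump_text):
    """Return a Counter of thread states found in the dump text."""
    return Counter(STATE_RE.findall(dump_text))

# Stand-in for a real dump file:
sample = """
"WebContainer : 0" #42 prio=5
   java.lang.Thread.State: BLOCKED
"WebContainer : 1" #43 prio=5
   java.lang.Thread.State: RUNNABLE
"WebContainer : 2" #44 prio=5
   java.lang.Thread.State: BLOCKED
"""
print(thread_states(sample))
```

This only tells you where to look; the actual analysis (which monitor, which back-end call) still happens together with the development team.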
Think about the repeatability of your tests. A necessity for this is an easy way to automatically set up and tune the portal environment. In the case of Portal you can export your page settings as XMLAccess files for the automatic setup of your portal pages. Furthermore you can use wsadmin scripts to set your tuning parameters. (If possible, reuse those assets to set the tuning parameters for your other environments too.) If you also want to test actions with side effects on your application, make sure you are able to reset the state of the application, for example the database with user-specific settings. If possible, try to reuse the existing build automation from your development or operations team. You will be grateful to have an automated setup in place, because you should always repeat the same test suite multiple times to guard against random noise in your results.
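Once the same suite runs repeatedly, aggregate the repetitions instead of trusting any single run. A small sketch with made-up numbers: the spread across runs tells you whether a difference between two benchmarks is signal or noise.

```python
# Sketch: aggregate repeated runs of the same test suite so one noisy
# run does not drive your conclusions. Numbers are illustrative.
from statistics import mean, stdev

def summarize_runs(runs):
    """runs: list of lists of response times (ms), one list per test run."""
    per_run_avg = [mean(r) for r in runs]
    return {
        "mean_ms": round(mean(per_run_avg), 1),
        "stdev_ms": round(stdev(per_run_avg), 1),  # spread across repetitions
    }

# Three repetitions of the same suite; the third one is noisy:
runs = [[120.0, 130.0, 125.0], [122.0, 128.0, 127.0], [180.0, 175.0, 170.0]]
print(summarize_runs(runs))
```

A high standard deviation across repetitions is itself a finding: it means something in the shared environment (network, other traffic) is interfering with your measurements.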
Monitor your infrastructure
Always monitor your infrastructure during performance test runs – otherwise you can't identify bottlenecks in your infrastructure (CPU load, I/O), and even worse, you can't rule out errors caused by your test setup. For example, if your load generator agents are utilizing the CPU at 100%, there is a high chance of hiccups and delays in the measurement of request timings. If you don't monitor your infrastructure you won't know about this. Hence you might come to the wrong conclusion that your application is not able to handle the expected user load, instead of scaling out the load generator agents. Luckily there is already a variety of tools on the market (top, nmon, rstatd, Windows Performance Monitor, Tivoli Composite Application Manager for Application Diagnostics, CA APM, New Relic) that will provide you with those metrics (CPU, memory utilization, I/O). Ideally, also try to integrate those numbers into your performance test tool for easy reporting.
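If no monitoring tool is available on an agent, even a tiny script gets you the essential number. A sketch that computes CPU utilization from two Linux `/proc/stat` readings (the sample strings stand in for real readings):

```python
# Sketch: CPU utilization between two '/proc/stat' cpu lines
# ('cpu  user nice system idle ...'). Sample values are made up.

def cpu_utilization(sample_before, sample_after):
    """Percent of non-idle CPU time between two /proc/stat samples."""
    def split(line):
        fields = [int(v) for v in line.split()[1:]]
        idle = fields[3]  # 4th field is the idle counter
        return sum(fields), idle
    total0, idle0 = split(sample_before)
    total1, idle1 = split(sample_after)
    busy = (total1 - total0) - (idle1 - idle0)
    return 100.0 * busy / (total1 - total0)

before = "cpu  100 0 100 800"
after = "cpu  200 0 150 850"
print(f"{cpu_utilization(before, after):.0f}% busy")  # -> 75% busy
```

If that number stays near 100% on a load generator agent during a run, distrust the recorded response times and add more agents.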
For a detailed analysis of your DX server you can use the built-in Performance Monitoring Infrastructure (PMI) exposed through the WAS Integrated Solutions Console. There is also a great article on developerWorks about using PMI for performance tuning if you are interested in how to leverage it, so I will skip further details here.
Tip: think about the data you want to monitor for each of your systems, and try to limit your measurements to only the parts you need.
Test your goals and beyond
Don't be satisfied just because you reached your desired performance targets during your test runs. Always try to identify the limits of your application under test, and think about how to reach them. For this you should leverage the different types of performance tests, namely load tests, stress tests, capacity tests and performance tests. There is a great table with the details and purpose of every test type on this page – have a look there. Beyond that, it's good practice to work with plateaus (different numbers of users over a specific time) to identify the best possible page throughput for your use case. If you are working with plateaus, also make sure you include a proper ramp-up stage.
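A plateau schedule is easy to express as data. A sketch with illustrative numbers – it produces (second, concurrent users) pairs for a stepped profile that ramps to each plateau and then holds it; feed the result to whatever your load tool can script:

```python
# Sketch: stepped load profile -- ramp linearly to each plateau, hold it,
# then step up to the next one. All numbers are illustrative.

def plateau_profile(plateaus, ramp_s, hold_s):
    """Return a list of (second, concurrent_users) points."""
    points, t, current = [], 0, 0
    for target in plateaus:
        for s in range(ramp_s):  # linear ramp toward the target
            ramped = current + (target - current) * (s + 1) // ramp_s
            points.append((t, ramped))
            t += 1
        current = target
        for _ in range(hold_s):  # hold the plateau
            points.append((t, current))
            t += 1
    return points

# Three plateaus, 60 s ramp, 5 min hold each:
profile = plateau_profile([100, 200, 300], ramp_s=60, hold_s=300)
print(profile[0], profile[-1])
```

Throughput measured on the hold phases (not during the ramps) is what you compare between plateaus to find the sweet spot before response times degrade.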
Use remote tests for high user loads
If you want to test your application under high user loads, it might be impossible to simulate that amount of load from a single machine. In such cases you should consider using additional machines to generate the load you need. (Don't reuse machines from your test environment, otherwise you will skew the recorded stats such as CPU utilization.) For more details have a look at the official documentation for your testing tool, for example JMeter Remote Testing or Load Testing Web Applications with Rational Performance Tester.
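When distributing the load, split the virtual-user target evenly across the agents so no single generator becomes the bottleneck. A trivial sketch (agent host names are placeholders):

```python
# Sketch: distribute a total virtual-user target across remote load
# generator agents as evenly as possible. Host names are placeholders.

def split_users(total_users, agents):
    """Return {agent: thread_count} distributing total_users evenly."""
    base, extra = divmod(total_users, len(agents))
    return {a: base + (1 if i < extra else 0) for i, a in enumerate(agents)}

agents = ["loadgen1", "loadgen2", "loadgen3"]
print(split_users(10_000, agents))  # first agent gets the remainder
```

Combined with the CPU monitoring above, this tells you when to add another agent: if the per-agent share drives utilization too high, grow the agent list rather than the per-agent thread count.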
Involve all parties
Think about involving all required third parties in your integration testing if possible. Depending on your level of integration, those parts can have a huge impact on your application.
Takeaway
Implementing performance tests for Portal doesn't differ from any other web application once you know the parts you have to be aware of:
- Try to setup an environment that comes close to your production environment
- Tune your portal for performance before you start performance testing with the ConfigEngine task tune-initial-performance
- Measure first then optimize. This is an iterative approach.
- Repeatability. Make sure it’s easy and fast to set up your test environment.
- Monitor your infrastructure. Otherwise you can’t identify your bottlenecks properly.
- Test the limits not your desired goals. Always find the maximum – not what you think you need.
- Use remote tests for high user loads. Avoid load generators with more than 50% CPU utilization.
- Involve all parties required for performance testing. Possibly also third parties you depend on in your application.
Links
- A little lesson on why you should care about (proper) performance testing – https://wp.me/p4IIDT-Sl
- DX performance testing substitution patterns for reusable test suites – https://developer.ibm.com/digexp/docs/dxperftestsubstpatterns/
- Portal V8.5 Performance Tuning Guide – https://www-10.lotus.com/ldd/portalwiki.nsf/dx/IBM_WebSphere_Portal_V_8.5_Performance_Tuning_Guide
- Automatic Application of Portal Tuning Parameters – https://www.ibm.com/developerworks/community/blogs/portalops/entry/automatic_application_of_portal_tuning_parameters?lang=en
- Using PMI for performance testing – http://www.ibm.com/developerworks/websphere/techjournal/0909_blythe/0909_blythe.html
- How to Ramp Up in Steps in Your Load Tests – https://www.blazemeter.com/blog/how-ramp-steps-your-load-tests
- Key Types of Performance Testing – https://msdn.microsoft.com/en-us/library/bb924357.aspx
- Why performance testing in production is not only a best practice — it’s a necessity – https://www.soasta.com/blog/web-performance-testing-production/
- Performance Modeling and Analysis – http://researcher.watson.ibm.com/researcher/view_group_pubs.php?grp=150