Cloudbench (CBTOOL) is a multi-benchmark harness that automates Infrastructure as a Service (IaaS) cloud stress and scalability testing through the execution of controlled experiments. In this illustrative example, we use a pair of Application Instance Deployment Request Submitters (AIDRS) to continuously generate new Cassandra/YCSB and Hadoop Application Instances (AIs), composed of seven and six instances each, respectively, with a specific arrival rate for each AI type.

For instance, Cassandra/YCSB AIs arrive with inter-arrival times drawn from an exponential distribution with a mean of 600 seconds (bounded between 200 and 2000 seconds), while Hadoop AIs arrive with inter-arrival times drawn from a uniform distribution between 200 and 900 seconds. The load level of each Hadoop AI follows a normal distribution with mean 5, standard deviation 2, and minimum and maximum values of 1 and 9, while its lifetime follows a Gaussian with mean 7200 seconds and standard deviation 600, bounded between 5000 and 8000 seconds. Finally, no more than 400 Cassandra/YCSB AIs should be created; we stop submitting requests for new AIs once the number of created AIs reaches 1000 and, after all pending AIs have been deployed, wait for 5 more hours.
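The CBTOOL CLI trace below implements this experiment. In the distribution specifications, fields are separated by the character “I”, in the order distribution, mean, standard deviation, minimum, maximum, with “X” marking fields that do not apply (a reading inferred from the commands themselves; for example, exponentialI600IXI200I2000 denotes an exponential distribution with mean 600, bounded between 200 and 2000 seconds).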

expid exp5
patternshow simplecysb
patternalter simplecysb iait=exponentialI600IXI200I2000
patternalter simplecysb max_ais=400
patternshow simplecysb
patternshow simplehd
patternalter simplehd iait=uniformIXIXI200I900
typealter hadoop load_factor=2000000
typealter hadoop load_level=normalI5I2I1I9
patternalter simplehd lifetime=normalI7200I600I5000I8000
patternshow simplehd
aidrsattach simplecysb
aidrsattach simplehd
waituntil AI ARRIVED=1000
aidrsdetach all
waituntil AI ARRIVING=0
ailist
waitfor 5h
aidetach all
monextract all
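In this trace, waituntil blocks until the stated counter reaches the given value, aidrsdetach all stops both submitters so that no new AIs are requested, and monextract all exports the monitoring data collected during the run for later analysis.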

By deploying multiple workloads at scale, an IaaS cloud can be evaluated along multiple dimensions, from control plane/API/management performance to virtualization performance. CBTOOL currently automates over 65 workloads and workload profiles, including HPC, big data, transactional, and synthetic workloads, and can run against more than 10 different IaaS clouds (for example, Amazon EC2, Google Compute Engine, and IBM SoftLayer). This allows experiments defined with the aforementioned tool-specific abstractions to be executed, unchanged, against any supported cloud.
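For example, pointing the same experiment at a different cloud only requires attaching CBTOOL to that cloud before replaying the trace; a minimal sketch, assuming “ec2” is the adapter name for Amazon EC2 and using an illustrative cloud name (the exact adapter names should be checked against the CBTOOL documentation):

cldattach ec2 MYEC2CLOUD

All of the pattern, type, and AIDRS commands in the trace above would then remain unchanged.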

[Figure: Overview of how CBTOOL works]

CBTOOL has been used to run experiments with tens of thousands of individual instances, and has uncovered stress-related bugs in multiple cloud subsystems that were not detected by any other testing methodology. In addition to workload deployment and performance data management, CBTOOL can inject faults into key components of a given cloud, allowing experimenters to evaluate its combined performance and reliability (performability) under different demand levels.
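A hedged sketch of how fault injection might be driven from the same CLI, assuming a Fault Injection Request Submitter (FIRS) with attach/detach commands parallel to the AIDRS ones above (the pattern name “simplefault” is illustrative, and the exact syntax should be verified against the CBTOOL documentation):

firsattach simplefault
waitfor 30m
firsdetach all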

Because of CBTOOL’s flexibility and extensibility (for example, its support for new IaaS clouds and workload types is constantly being expanded), the tool was selected as the core framework behind the recently released SPEC Cloud™ IaaS 2016, the first industry-standard cloud benchmark developed by the Standard Performance Evaluation Corporation (SPEC) to measure the scalability and elasticity of IaaS clouds.
