Can the performance of cloud providers be measured in a meaningful, reproducible, and standard way? Which cloud provider is better in terms of performance?
Up until now, it has been difficult to answer these questions. There are several reasons for that:
1. Reproducibility in measuring cloud performance
Often, a cloud provider will choose to showcase the performance of its cloud using a particular benchmark, but it may not be possible to reproduce the results because the full details of the measurement were either not made public or not peer-reviewed.
2. What does performance in cloud really mean?
Is it the ability to provision resources as quickly as they are needed? Or is it the run-time performance of an application running in multiple instances that, in turn, puts load on the CPU, memory, disk, and network of a cloud? Both provisioning and run-time performance are important for applications that need to scale in the cloud.
3. Which cloud type to measure performance for?
There are different types of cloud offerings, such as infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), each requiring different types of performance tests. Even IaaS clouds come in different flavors, such as public or private.
4. Which metrics should be devised to measure cloud performance?
A cloud is ‘scalable’ and ‘elastic’ by definition, yet how can these qualities be measured precisely?
The Cloud subcommittee of the Standard Performance Evaluation Corporation (SPEC) has developed SPEC Cloud(TM) IaaS 2016, the first industry-standard benchmark to measure the performance of infrastructure-as-a-service (IaaS) clouds, both public and private. The benchmark had input from the who’s who of the cloud space, from cloud providers large and small to software and hardware vendors.
The benchmark uses YCSB/Cassandra and KMeans/Hadoop workloads in a multi-instance configuration. Over a period of time, the benchmark creates a number of YCSB/Cassandra and KMeans/Hadoop clusters, each performing a fixed amount of statistically similar work. The benchmark terminates when quality of service is violated or when the maximum number of application clusters, as set by the cloud provider, is reached.
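The stopping behavior described above can be sketched as a simple loop. This is an illustrative simulation only; the cluster names, QoS threshold, and function names are hypothetical stand-ins, not the actual SPEC Cloud harness API.

```python
import random

# Hypothetical sketch of the benchmark's scale-out phase: application
# clusters (e.g. YCSB/Cassandra or KMeans/Hadoop) are launched one after
# another, each performing a fixed amount of work, until either a
# quality-of-service (QoS) limit is violated or the configured maximum
# number of clusters is reached. All names and thresholds are illustrative.

random.seed(0)             # deterministic for this sketch

MAX_CLUSTERS = 10          # maximum set by the cloud provider under test
QOS_LATENCY_LIMIT = 3.0    # illustrative QoS threshold (arbitrary units)

def run_cluster_workload(cluster_id):
    """Stand-in for provisioning a cluster and running its fixed workload.

    Returns the observed latency metric for that cluster."""
    # Simulated gradual degradation as load on the cloud grows.
    return 1.0 + 0.2 * cluster_id + random.uniform(0, 0.1)

def scale_out_phase():
    results = []
    for cluster_id in range(1, MAX_CLUSTERS + 1):
        latency = run_cluster_workload(cluster_id)
        if latency > QOS_LATENCY_LIMIT:
            # QoS violated: stop launching new clusters.
            break
        results.append((cluster_id, latency))
    return results

results = scale_out_phase()
print(f"Valid clusters before stopping: {len(results)}")
```

In this toy model the simulated latency crosses the QoS limit before the cluster cap is hit, so the run ends early, mirroring the benchmark's "QoS violated or maximum reached" termination rule.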
The SPEC Cloud(TM) IaaS 2016 benchmark reports performance based on three primary metrics:
- Scalability: measures the total amount of work performed by application instances running in a cloud.
- Elasticity: measures whether the work performed by application instances scales linearly in a cloud.
- Mean instance provisioning time: measures the average time taken to provision an instance, from the initial request until it is ready to accept connections.
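To make the three metrics concrete, here is a simplified computation over hypothetical per-cluster measurements. The formulas are stand-ins to convey the intent of each metric, not the exact definitions in the SPEC Cloud(TM) IaaS 2016 run rules, and the sample data is invented.

```python
from statistics import mean

# (work_done, provisioning_time_seconds) per application cluster.
# Hypothetical sample data for illustration only.
clusters = [
    (100.0, 42.0),
    (98.0, 45.0),
    (95.0, 51.0),
    (90.0, 60.0),
]

# Scalability: total work performed by all application clusters.
scalability = sum(work for work, _ in clusters)

# Elasticity: how close the total work comes to perfectly linear scaling,
# taking the first cluster's work as the per-cluster baseline.
baseline = clusters[0][0]
elasticity = scalability / (baseline * len(clusters))

# Mean instance provisioning time: average time from request to ready.
mean_provisioning = mean(t for _, t in clusters)

print(f"Scalability: {scalability:.1f}")          # 383.0
print(f"Elasticity: {elasticity:.2%}")            # 95.75%
print(f"Mean provisioning time: {mean_provisioning:.1f}s")  # 49.5s
```

The pattern in the sample data, where each added cluster performs slightly less work, is what the elasticity metric is designed to expose: total work grows, but sublinearly.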
The SPEC Cloud(TM) IaaS 2016 benchmark gives users the flexibility to configure the IaaS cloud under test using various combinations of physical nodes, virtual machines, and/or containers with appropriate storage and networking. The benchmark also supports multi-tenancy.
Here are the relevant links for more information: