
IBM Power Systems

Big Data and Analytics performance proof-points

Faster time-to-value for big data. Discover how Hadoop innovation can deliver faster, more affordable business insights.

Apache Spark SQL TPC-DS 3 TB four-stream throughput on IBM POWER9

For the systems and workload compared:

  • The IBM® Power® System LC922 server delivers superior performance running multiple TPC-DS query streams with Apache Spark SQL.
  • Running on IBM POWER9™, it delivers 1.30x the query results per hour of the Intel Xeon SP Gold 6140 system, and 1.59x better price-performance.

System configuration

Power System:
  Hardware: Four nodes of IBM Power LC922 (two 20-core/2.7 GHz/512 GB memory) using twelve 8 TB HDDs, 10 GbE two-port, RHEL 7.5 LE for IBM POWER9
  Software: Apache Spark 2.3.0 (http://spark.apache.org/downloads.html) and open source Hadoop 2.7.5
Competitor:
  Hardware: Four nodes of Intel Xeon Gold 6140, 36 cores (2 x 18-core chips) at 2.3 GHz, 512 GB memory, twelve 8 TB HDDs, 10 Gbps NIC, Red Hat Enterprise Linux 7.5
  Software: Apache Spark 2.3.0 (http://spark.apache.org/downloads.html) and open source Hadoop 2.7.5

Notes:

  • Results are based on IBM internal measurements running four concurrent streams of 99 distinct and diverse queries of varying complexity and length against a 3 TB data set. Results are valid as of 4/25/18; tests were conducted under laboratory conditions with speculative execution controls to mitigate user-to-kernel and user-to-user side-channel attacks enabled on both systems. Individual results can vary based on workload size, use of storage subsystems, and other conditions.
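
The "four-stream throughput" metric above can be illustrated with a small stand-in. The sketch below is a toy under stated assumptions: it uses SQLite and Python threads in place of Spark SQL and the TPC-DS data set, but it runs four concurrent query streams and derives an aggregate queries-per-hour figure the same way.

```python
import os
import sqlite3
import tempfile
import threading
import time

DB = os.path.join(tempfile.gettempdir(), "stream_bench.db")

# Build a tiny stand-in table (the real test ran against a 3 TB TPC-DS data set).
conn = sqlite3.connect(DB)
conn.execute("DROP TABLE IF EXISTS sales")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(10_000)])
conn.commit()
conn.close()

QUERIES = ["SELECT COUNT(*) FROM sales",
           "SELECT SUM(amount) FROM sales",
           "SELECT AVG(amount) FROM sales WHERE id % 2 = 0"]

def run_stream(counts, idx):
    # Each stream gets its own connection and runs its queries back to back.
    c = sqlite3.connect(DB)
    for q in QUERIES:
        c.execute(q).fetchall()
    c.close()
    counts[idx] = len(QUERIES)

counts = [0] * 4
start = time.perf_counter()
threads = [threading.Thread(target=run_stream, args=(counts, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed_h = (time.perf_counter() - start) / 3600

# Throughput is the aggregate: queries completed across ALL streams per hour.
qph = sum(counts) / elapsed_h
print(f"{sum(counts)} queries across 4 streams -> {qph:,.0f} QpH")
```

The published comparison divides this aggregate QpH figure for one system by the same figure for the other; the 0 think time and fixed stream count mirror the note above, not any Kinetica- or Spark-specific API.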

IBM Db2 Warehouse running on IBM Power System L922 reduces software costs with superior core performance versus tested x86 systems

For the systems and workload compared:

  • The IBM® Power® System L922 server costs less than the Intel Xeon Platinum 8168-based server.
  • The Power L922 runs 2.54x more queries per hour per core than the Intel 8168.
  • The Power L922 cluster provides 2.44x better price-performance than the Intel 8168 cluster.
  • The Power L922 solution delivers 57% lower solution costs than the Intel 8168 solution.


IBM Power L922 (20-core, 512 GB) vs. Intel Xeon SP based 2-socket server (48-core, 512 GB):

  Total queries per hour (note 1): 3064 QpH vs. 2891 QpH
  Server price, 3-year warranty (notes 2, 3, 4): $37,222 vs. $52,330
  Solution cost (note 5), three nodes, server + RHEL OS + virtualization + Db2 @ $12,800* per core:
    Power L922: $817,299 (per node: $13,341 + $12,077 + $256,000*)
    Intel Xeon SP: $1,899,449 (per node: $30,126 + $3,919 + $614,400*)
  QpH per $1000: 3.74 vs. 1.53



System configuration

Power System: 3x Power L922 servers with 20 cores and 512 GB RAM
Competitor: 3x Intel 8168 servers with 48 cores and 512 GB RAM

Notes:

  1. The results are based on IBM internal testing of IBM Db2® Warehouse running a sample analytic workload of 30 distinct queries of varying complexities (intermediate and complex). The results are valid as of 3/14/18 and were conducted under laboratory conditions with speculative execution controls to mitigate user-to-kernel and user-to-user side-channel attacks on both systems. Individual results can vary based on workload size, use of storage subsystems, and other conditions.
  2. 3x IBM Power L922 (2x 10-core/2.9 GHz/512 GB memory) using two 300 GB SATA 7.2K rpm LFF HDD, 1x GbE two-port, 10 GbE two-port, 1x 16 Gbps FCA running DB2 Warehouse 2.5 and IBM Spectrum Scale™ 4.2 with RHEL 7.4.
  3. Competitive stack: 3x 2-socket Intel Xeon Scalable Processor (Skylake-SP) Platinum 8168 (2x 24-core/2.4 GHz/512 GB memory) using 2x 300 GB SATA 7.2K rpm LFF HDD, 1 GbE two-port, 10 GbE two-port, 1x 16 Gbps FCA, running Db2 Warehouse 2.5 and Spectrum Scale 4.2 with RHEL 7.4.
  4. Pricing is based on Power L922 http://www-03.ibm.com/systems/power/hardware/linux-lc.html and typical industry standard x86 pricing https://www.synnexcorp.com/us/govsolv/pricing/
  5. Db2 Warehouse pricing is based on USD regional perpetual license costs where certain discounts can apply.
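
The table's derived figures can be recomputed from its raw numbers. The sketch below redoes the arithmetic; note the published 2.44x price-performance ratio differs slightly from the quotient of the per-$1000 figures (~2.46), presumably due to rounding in the source.

```python
# Recomputing the published ratios from the table's own numbers.
# Cluster core counts come from the configuration: 3 nodes of 20 cores
# vs. 3 nodes of 48 cores.
power = {"qph": 3064, "cores": 3 * 20, "solution_cost": 817_299}
xeon  = {"qph": 2891, "cores": 3 * 48, "solution_cost": 1_899_449}

# Queries per hour per core (the 2.54x claim).
per_core = (power["qph"] / power["cores"]) / (xeon["qph"] / xeon["cores"])

# QpH per $1000 of solution cost (the 3.74 and 1.53 figures).
p = power["qph"] / (power["solution_cost"] / 1000)
x = xeon["qph"] / (xeon["solution_cost"] / 1000)

# Solution-cost savings (the 57% claim).
savings = 1 - power["solution_cost"] / xeon["solution_cost"]

print(f"QpH/core advantage: {per_core:.2f}x")   # 2.54x
print(f"Power L922: {p:.2f} QpH/$1000")         # 3.75 (table truncates to 3.74)
print(f"Intel 8168: {x:.2f} QpH/$1000")         # 1.52 (table rounds to 1.53)
print(f"solution cost savings: {savings:.0%}")  # 57%
```

This confirms that the 2.54x per-core and 57% cost claims follow directly from the queries-per-hour and cost numbers in the table.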

DSX Local on IBM POWER8 running K-means Machine Learning workload

For the systems and workload compared:

  • Superior scalability and response time
    • 2.5x performance advantage scaling from 1 GB to 15 GB
    • More than 2x performance advantage scaling from one to four users



System configuration

Power System Competitor
Power S822 LC for HPC – 20 cores, 512 GB RAM, and 1 SSD for DSX data Broadwell E5 2640 v4 – 20 cores, 512 GB RAM, and 1 SSD for DSX data
Power S822LC for Big Data – 20 cores, 512 GB RAM, and 10 HDDs for DSX data Broadwell E5 2699 v4 – 44 cores, 512 GB RAM, and 10 HDD for DSX data

Notes:

  • Test results are based on running a machine learning workload that applies the k-means clustering algorithm to data sets ranging from 1 GB to 15 GB in DSX Local notebooks. The k-means clustering algorithm is implemented in Python using the TensorFlow APIs shipped with DSX.
  • The performance measurement is the execution time of the k-means clustering algorithm in the session: the start timestamp is collected right after variable initialization in the session, before any iteration of model building, and the stop timestamp is collected right after the loop that forms five clusters with 100 iterations completes.
  • A series of tests were performed during the month of December 2017.
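
As a rough illustration of the timing methodology in these notes, the sketch below is a plain-Python stand-in (the actual workload used the TensorFlow APIs in DSX Local): the clock starts after variable initialization and stops after the 100-iteration loop that forms five clusters.

```python
import random
import time

random.seed(0)
# Small synthetic 2-D data set (the real test scaled from 1 GB to 15 GB).
data = [(random.gauss(cx, 0.5), random.gauss(cy, 0.5))
        for cx, cy in [(0, 0), (5, 0), (0, 5), (5, 5), (10, 10)]
        for _ in range(200)]

K, ITERATIONS = 5, 100
centers = random.sample(data, K)

# Timing boundaries follow the methodology described above: start after
# variable initialization, stop after the full 100-iteration loop.
start = time.perf_counter()
for _ in range(ITERATIONS):
    # Assignment step: each point goes to its nearest center.
    clusters = [[] for _ in range(K)]
    for x, y in data:
        j = min(range(K),
                key=lambda k: (x - centers[k][0])**2 + (y - centers[k][1])**2)
        clusters[j].append((x, y))
    # Update step: move each center to the mean of its cluster.
    for k, pts in enumerate(clusters):
        if pts:
            centers[k] = (sum(p[0] for p in pts) / len(pts),
                          sum(p[1] for p in pts) / len(pts))
elapsed = time.perf_counter() - start
print(f"k-means ({K} clusters, {ITERATIONS} iterations) took {elapsed:.3f} s")
```

Measuring only the iteration loop, as here, excludes data loading and session startup, which is why the published numbers isolate algorithm scalability rather than end-to-end notebook time.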

Realize 1.8X application performance gains with Kinetica on IBM POWER9™ with NVLink 2.0 versus Kinetica on IBM POWER8™

For the systems and workload compared:

  • Improved application performance with Kinetica filtering Twitter Tweets
  • 80% more throughput on IBM Power System AC922 than on IBM Power System S822LC for HPC

System configuration

IBM Power System AC922:
  40 cores (2 x 20-core chips), POWER9 with NVLink 2.0
  2.25 GHz, 1024 GB memory
  (4) Tesla V100 GPUs
  Red Hat Enterprise Linux 7.4 for Power Little Endian (POWER9) running Kinetica 6.1
IBM Power System S822LC for HPC:
  20 cores (2 x 10-core chips) / 160 threads, POWER8 with NVLink
  2.86 GHz, 1024 GB memory
  (4) Tesla P100 GPUs
  Red Hat Enterprise Linux 7.4 for POWER8 running Kinetica 6.1
Both systems: (2) 6 Gb SSDs, 2-port 10 Gb Ethernet

Notes:

  • Throughput results are based on running Kinetica “Filter by geographic area” queries on a data set of 280 million simulated Tweets with 80 to 600 concurrent clients, each with 0 think time.
  • Test date: November 27, 2017

Kinetica on IBM Power System S822LC for HPC

For the systems and workload compared:

  • Accelerate the performance of Kinetica with 2.5X better performance than x86 accelerated solutions
    • IBM Power System S822LC with 4 Tesla P100s: 188,852 queries per hour
    • Xeon E5-2640 v4 system with 4 Tesla K80s: 73,320 queries per hour
Kinetica on POWER8

System configuration

Power S822LC for HPC:
  (2) IBM POWER8 with NVLink, 2.86 GHz, 20 cores, 160 threads
  1024 GB memory
  (3) 3.84 TB 2.5″ 6 Gbps SSDs
  (4) NVIDIA Tesla P100 GPUs, NVLink attached
  Ubuntu 16.04.1 LTS, CUDA 8.0
Xeon E5-2640 v4 competitor:
  (2) Xeon E5-2640 v4 @ 2.40 GHz, 20 cores
  512 GB memory
  (2) 800 GB Intel SSD DC S3510 Series 2.5″ 6 Gb SSDs
  (4) NVIDIA Tesla K80 GPUs, PCIe Gen3 attached
  Ubuntu 16.04 LTS, CUDA 8.0

Notes:

  • Date of testing: 8/12/2016
  • All results are based on running Kinetica “Filter by geographic area” queries on a data set of 280 million simulated Tweets with 1 to 80 simultaneous query streams, each with 0 think time.
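
A "filter by geographic area" query of the kind used in these tests amounts to a bounding-box predicate over latitude/longitude. The sketch below is plain Python over simulated points, not Kinetica's API, but it shows the shape of the operation being benchmarked.

```python
import random
import time

random.seed(42)
# Simulated tweets as (lat, lon) pairs (the real test used 280 million in Kinetica).
tweets = [(random.uniform(-90, 90), random.uniform(-180, 180))
          for _ in range(100_000)]

def filter_by_area(points, lat_min, lat_max, lon_min, lon_max):
    """Keep only points inside the bounding box: the essence of a
    'filter by geographic area' query."""
    return [(la, lo) for la, lo in points
            if lat_min <= la <= lat_max and lon_min <= lo <= lon_max]

start = time.perf_counter()
# Hypothetical bounding box, roughly the US Pacific Northwest.
hits = filter_by_area(tweets, 40.0, 49.0, -125.0, -104.0)
elapsed = time.perf_counter() - start
print(f"{len(hits)} of {len(tweets)} tweets in area ({elapsed * 1000:.1f} ms)")
```

In the actual tests this predicate is evaluated GPU-side by Kinetica; the benchmark varies the number of simultaneous query streams issuing boxes like this one with 0 think time.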

Spark MLlib Logistic Regression 100 GB (in-mem)

For the systems and workload compared:

  • 2.16X relative price performance advantage for IBM Power® System S812LC cluster compared to the x86 cluster.
  • 2.55X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark MLlib Logistic Regression 100 GB (in-mem) graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 256 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 7 6 Gbps RAID Adapter (read_ahead_kb = 128), 5 x 6 TB HDD for Hadoop Distributed File System (HDFS) and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 256 GB, RHEL 7.1, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter (read_ahead = 4096), 5 x 1 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark MLlib Logistic Regression 1 TB (in-mem)

For the systems and workload compared:

  • 1.479X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.74X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark MLlib Logistic Regression 1 TB (in-mem) graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 256 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 7 6 Gbps RAID Adapter (read_ahead_kb = 128), 5 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 256 GB, RHEL 7.1, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter (read_ahead = 4096), 5 x 1 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark MLlib Logistic Regression 1 TB (50/50)

For the systems and workload compared:

  • 1.1475X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.35X relative system performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark MLlib Logistic Regression 1 TB (50/50) graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 256 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 7 6 Gbps RAID Adapter (read_ahead_kb = 128), 5 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 256 GB, RHEL 7.1, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter (read_ahead = 4096), 5 x 1 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark MLlib Logistic Regression 10 TB (50/50)

For the systems and workload compared:

  • 1.156X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.36X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark MLlib Logistic Regression 10 TB (50/50) graph

System configuration

Power S812LC (13-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 512 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 8 12 Gbps RAID Adapter (read_ahead_kb = 4096), 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (13-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 512 GB, RHEL 7.2, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter (read_ahead = 4096), 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark SQL 1 TB (five sequential queries)

For the systems and workload compared:

  • 1.4705X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.73X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark SQL 1 TB (five sequential queries) graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 512 GB, RHEL 7.2, SMT4; Storage: Microsemi Series 8 12 Gbps RAID Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 512 GB, RHEL 7.2, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark SQL 2 TB (five sequential queries)

For the systems and workload compared:

  • 1.36X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.60X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark SQL 2 TB (five sequential queries) graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 512 GB, RHEL 7.2, SMT4; Storage: Microsemi Series 8 12 Gbps RAID Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 512 GB, RHEL 7.2, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark MLlib SVM (Support Vector Machine)

For the systems and workload compared:

  • 1.25X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.47X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark MLlib SVM (Support Vector Machine) graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 512 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 8 12 Gbps RAID Adapter, 5 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 512 GB, RHEL 7.2, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter, 5 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark MLlib MF (Matrix Factorization)

For the systems and workload compared:

  • 1.19X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.40X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark MLlib MF (Matrix Factorization) graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 512 GB, RHEL 7.2, SMT4; Storage: Microsemi Series 8 12 Gbps RAID Adapter, 5 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 512 GB, RHEL 7.2, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter, 5 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark GraphX SVD++ (Singular Value Decomposition++)

For the systems and workload compared:

  • 1.955X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 2.3X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark GraphX SVD++ (Singular Value Decomposition++) graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 512 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 8 12 Gbps RAID Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 512 GB, RHEL 7.2, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark GraphX Triangle Count

For the systems and workload compared:

  • 1.581X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.86X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark GraphX Triangle Count graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 512 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 8 12 Gbps RAID Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 512 GB, RHEL 7.2, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark GraphX PageRank

For the systems and workload compared:

  • 1.13X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.33X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark GraphX PageRank graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 512 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 8 12 Gbps RAID Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 512 GB, RHEL 7.2, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter, 10 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Spark Overall Average

For the systems and workload compared:

  • 1.445X relative price performance advantage for Power S812LC cluster compared to the x86 cluster.
  • 1.7X relative systems performance advantage for Power S812LC cluster compared to the x86 cluster.
Spark Overall Average graph

System configuration

Power S812LC (seven-node cluster):
  Hardware and OS: 10-core, 3.53 GHz, 256 GB, RHEL 7.2, SMT8; Storage: Microsemi Series 7 6 Gbps RAID Adapter (read_ahead_kb = 128), 5 x 6 TB HDD for HDFS and Spark I/O
  Network: Public: 1 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_60-b27), HDFS Version 2.7.1, Spark Version 1.5.1
Xeon E5-2620 (seven-node cluster):
  Hardware and OS: 12-core, 3.2 GHz, 256 GB, RHEL 7.1, SMT2; Storage: LSI 12 Gbps SAS 3108 RAID-on-Chip Adapter (read_ahead = 4096), 5 x 1 TB HDD for HDFS and Spark I/O
  Network: Public: 10 Gbps, Private: 10 Gbps
  Software: OpenJDK 8 (build 1.8.0_65-b17), HDFS Version 2.7.1, Spark Version 1.5.1

Notes:

  • The configurations above apply to the Logistic Regression benchmarks (Input Data: 100 GB and 1 TB).
  • All other benchmark runs used the Microsemi Series 8 12 Gbps RAID Adapter (read_ahead_kb = 4096) for the Power S812LC cluster and 512 GB of memory for both the Power S812LC and x86 systems.
  • All other benchmark runs used 10 x 6 TB HDD for HDFS and Spark I/O for both the Power S812LC and x86 systems. The exceptions were matrix factorization, which used 10 x 6 TB HDD for Spark I/O and 5 x 6 TB for HDFS and SVM, which used 5 x 6 TB HDD for HDFS and Spark I/O.
  • For logistic regression with the 10 TB input data and 50% in-memory case, 12 data nodes were used for both the Power S812LC and x86 clusters.
  • Spark benchmark workloads that were run belong to three categories: machine learning, SQL, and graph computation.
    • The following machine learning benchmarks were run: matrix factorization, logistic regression, and support vector machines.
    • The SQL benchmark executes five different queries from the select, aggregate, and join categories and records the aggregate time spent to execute all five queries.
    • The following graph computation benchmarks were run: page rank, triangle count, and singular value decomposition++.
  • Benchmark Suite for Apache Spark, GitHub link: https://github.com/SparkTC/spark-bench
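
The SQL category described above runs five queries from the select, aggregate, and join categories and records the aggregate time for all five. The sketch below is a toy stand-in, using SQLite and made-up tables rather than Spark SQL and spark-bench's generated data, but it shows the query mix and the single aggregate timing.

```python
import sqlite3
import time

# Stand-in tables; the actual benchmark runs Spark SQL over generated data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer INTEGER, amount REAL)")
conn.execute("CREATE TABLE customers (id INTEGER, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, i * 1.5) for i in range(5000)])
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, "east" if i % 2 else "west") for i in range(100)])

# Five queries drawn from the select, aggregate, and join categories,
# timed as one aggregate figure, as in the benchmark description above.
queries = [
    "SELECT * FROM orders WHERE amount > 6000",                     # select
    "SELECT customer, COUNT(*) FROM orders GROUP BY customer",      # aggregate
    "SELECT SUM(amount) FROM orders",                               # aggregate
    "SELECT region, SUM(o.amount) FROM orders o JOIN customers c "
    "ON o.customer = c.id GROUP BY region",                         # join
    "SELECT AVG(amount) FROM orders",                               # aggregate
]

start = time.perf_counter()
for q in queries:
    conn.execute(q).fetchall()
total = time.perf_counter() - start
print(f"five queries executed sequentially in {total * 1000:.1f} ms")
```

Recording one aggregate time for the whole sequence, rather than per-query times, is what makes the published SQL numbers comparable across the two clusters.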

IBM Data Engine for Hadoop and Spark (IDE-HS) Cluster Performance

For the systems and workload compared:

  • IDE-HS infrastructure delivers Spark workload scaling to minimize execution times and reduce batch windows.
  • 2.1X more performance per dollar spent for Spark Logistic Regression based machine learning, used in model training by a wide variety of lines of business.
  • 1.4X more performance per dollar spent for Support Vector Machine (SVM), a machine learning algorithm used in product recommender systems.
  • 1.7X more performance per dollar spent for Spark SQL query processing, used widely in big data clusters.
IBM Data Engine for Hadoop and Spark (IDE-HS) Cluster Performance graph

System configuration

Power S812LC: 10-core, 2.92 GHz, 256 GB, RHEL 7.2, 80 threads, OpenJDK 1.8
Xeon E5-2620 V3: 12-core, 2.4 GHz, 256 GB, RHEL 7.1, 24 threads, OpenJDK 1.8

Notes:

  • All results are based on IBM internal testing of three SparkBench benchmarks consisting of SQL, RDD Relation-Logistic Regression, and SVM.
  • Pricing is based on a comparable system built on the Xeon E5-2620 v3 and IBM list price as of 1/19/2016.

DB2 BLU analytics workload query performance on POWER8 in SoftLayer

For the systems and workload compared:

  • DB2 with BLU Acceleration® on IBM SoftLayer C812L-L in the cloud delivers cognitive insights 43% faster.
DB2 BLU analytics workload query performance on POWER8 in SoftLayer graph

System configuration

IBM SoftLayer C812L-L: 2.92 GHz, 10 cores, 512 GB, Ubuntu 14.04, DB2 10.5
Xeon E5-2620 V3: 2.3 GHz, 12 cores, 512 GB, Red Hat Enterprise Linux (RHEL) 7.2, DB2 10.5

Notes:

  • Results are current as of February 18, 2016.

DB2 BLU analytics fixed time workload performance on POWER8 in SoftLayer

For the systems and workload compared:

  • DB2 with BLU Acceleration on IBM SoftLayer C812L-L in the cloud delivers 65% more insights in the same time.
DB2 BLU analytics fixed time workload performance on POWER8 in SoftLayer graph

System configuration

IBM SoftLayer C812L-L: 2.92 GHz, 10 cores, 512 GB, Ubuntu 14.04, DB2 10.5
Xeon E5-2620 V3: 2.3 GHz, 12 cores, 512 GB, Red Hat Enterprise Linux (RHEL) 7.2, DB2 10.5

Notes:

  • Results are current as of February 18, 2016.

© IBM Corporation 2017

IBM, the IBM logo, ibm.com, POWER and POWER8 are trademarks of the International Business Machines Corp., registered in many jurisdictions worldwide. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Other product and service names may be the trademarks of IBM or other companies.

The content in this document (including any pricing references) is current as of July 22, 2015 and may be changed by IBM at any time. Not all offerings are available in every country in which IBM operates.

THE INFORMATION CONTAINED ON THIS WEBSITE IS PROVIDED ON AN “AS IS” BASIS WITHOUT ANY WARRANTY EXPRESS OR IMPLIED INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.

In no event will IBM be liable for damages arising directly or indirectly from any use of the information contained in this document.

All information contained on this website is subject to change without notice. The information contained in this website does not affect or change IBM product specifications or warranties. IBM’s products are warranted according to the terms and conditions of the agreements under which they are provided. Nothing in this website shall operate as an express or implied license or indemnity under the intellectual property rights of IBM or third parties.

All information contained on this website was obtained in specific environments, and is presented as an illustration. The results obtained in other operating environments may vary.

No licenses, expressed or implied, by estoppel or otherwise, to any intellectual property rights are granted by this website.