In this presentation at the 2017 DataWorks Summit in Munich, Romeo Kienzler discusses strategies for parallelizing deep learning neural networks.

In this video:

Romeo explains, humorously, that this is a “beta” talk, put together at the last minute. “Don’t kill me,” he implores. Joking aside, Romeo then describes the basic architecture of deep learning neural networks. In the discussion, he describes and mathematically illustrates concepts such as the “forward pass,” backpropagation, and gradient descent. He then lists and describes four types of parallelism: inter-model, data, intra-model, and pipelined.
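The three concepts above fit together in a few lines of code. As a minimal sketch (not taken from the talk), here is a one-neuron “network” trained by gradient descent on a squared loss; the names `w`, `b`, and `lr` are illustrative:

```python
def forward(w, b, x):
    # Forward pass: compute the prediction y_hat = w*x + b.
    return w * x + b

def train(data, lr=0.05, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            y_hat = forward(w, b, x)
            # Backward pass: gradient of the loss L = (y_hat - y)^2
            # with respect to y_hat, then chained to w and b.
            grad = 2.0 * (y_hat - y)
            # Gradient descent update: step against the gradient.
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# Recover y = 2x + 1 from four noiseless points.
w, b = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)])
```

A real deep network repeats the same forward/backward/update cycle, just with many layers and with the chain rule (backpropagation) carrying gradients through each of them.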

Romeo then briefly describes the Apache Spark topology, followed by a discussion of a number of deep learning frameworks and how each achieves these different types of parallelism on Spark. These frameworks include DeepLearning4J, Apache SystemML, TensorFrames, TensorSpark, and CaffeOnSpark.
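Of the four types, data parallelism is the one these Spark-based frameworks most commonly exploit. As a minimal sketch (not from the talk, and without an actual Spark dependency): each “worker” computes a gradient on its own shard of the data, the gradients are averaged, and a single shared model is updated, similar in spirit to the parameter-averaging scheme DeepLearning4J uses on Spark. The names `shard` and `worker_grad` are illustrative:

```python
def worker_grad(w, shard):
    # Local gradient of the mean squared loss for the model y_hat = w * x,
    # computed only over this worker's shard of the data.
    g = 0.0
    for x, y in shard:
        g += 2.0 * (w * x - y) * x
    return g / len(shard)

def train_data_parallel(shards, lr=0.05, steps=100):
    w = 0.0
    for _ in range(steps):
        # "Map" step: every worker computes its local gradient
        # (in a real cluster these run in parallel on executors).
        grads = [worker_grad(w, shard) for shard in shards]
        # "Reduce" step: average the gradients and update the shared model.
        w -= lr * sum(grads) / len(grads)
    return w

# Data for y = 3x, split across two workers.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = train_data_parallel(shards)
```

The map/reduce shape of this loop is exactly why Spark is a natural fit for data-parallel training, while intra-model and pipelined parallelism require splitting the model itself and are harder to express on top of it.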

The presentation then closes out with a Q&A period.

Discovering Data Science with Romeo Kienzler

Follow Romeo as he tackles the most difficult challenges in data science.
