In this video:
- Romeo Kienzler, Chief Data Scientist, IBM
Romeo explains, humorously, that this is a “beta” talk, put together at the last minute. “Don’t kill me,” he implores. Joking aside, Romeo then describes the basic architecture of deep learning neural networks. In the discussion, he describes and illustrates mathematically such concepts as the forward pass, backpropagation, and gradient descent. He then lists and describes the four types of parallelism: inter-model, data, intra-model, and pipelined.
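The forward pass, backpropagation, and gradient descent covered in the talk can be sketched in a few lines. The toy example below (not taken from the presentation) trains a single linear neuron on mean squared error with plain Python; all names and values are illustrative:

```python
# Toy illustration (not from the talk): train one linear neuron
# y_hat = w*x + b with gradient descent on mean squared error.

def train(data, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in data:
            y_hat = w * x + b               # forward pass
            err = y_hat - y
            dw += 2 * err * x / len(data)   # backpropagated gradient w.r.t. w
            db += 2 * err / len(data)       # backpropagated gradient w.r.t. b
        w -= lr * dw                        # gradient descent update
        b -= lr * db
    return w, b

# Fit points lying on the line y = 2x + 1
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # w and b approach 2 and 1
```

A real network stacks many such neurons with nonlinearities between layers, but the loop structure (forward pass, gradient computation, parameter update) is the same.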
Romeo then briefly describes the Apache Spark topology, followed by a discussion of a number of deep learning frameworks and how each achieves these different types of parallelism. These frameworks include DeepLearning4J, Apache SystemML, TensorFrames, TensorSpark, and CaffeOnSpark.
The presentation then closes out with a Q&A period.
Follow Romeo as he tackles the most difficult challenges in data science.