What do companies like LinkedIn, Uber, Netflix, Yelp and IBM have in common? They have built their businesses on top of Apache Kafka.

Why join us in San Francisco?

The Kafka Summit is an opportunity to engage with the developer community about a revolution transforming industries: streaming platforms at massive scale. As the Kafka Summit quickly approaches, we invite anyone who loves discussing, coding in, or building with Kafka to join IBM while we're in San Francisco. As a gold sponsor of the Kafka Summit, we're busy getting ready for an engaging discussion with the Kafka community on code, community, and culture.

What is IBM doing with Kafka?

Message Hub is IBM Cloud's implementation of Kafka. It provides event distribution services for, and managed integrations with, key IBM Cloud services such as the Watson IoT Platform, Cloud Object Store, and OpenWhisk. IBM has been contributing to the Kafka project since 2015; its community contributions to date include Kafka client enhancements, security enhancements, server API enhancements, system and unit tests, and numerous defect fixes.

In the Expo

Serverless platforms like Apache OpenWhisk provide a runtime that scales automatically in response to demand, resulting in both lower cloud resource costs and additional business value. Visit IBM booth #112 in the Golden Gate Ballroom to view the IBM journey demo, "Create autoscaling actions that respond to message streams." Join our Developer Advocates Marek Sadowski and Lennart Frantzell, who will showcase how the application uses two OpenWhisk actions (written in JavaScript) that read and write messages with IBM Message Hub (based on Apache Kafka). The use case demonstrates how actions can work with data services and execute logic in response to message events. Expo hours are 7:30 am – 6:00 pm.
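To give a flavor of the pattern the demo illustrates, here is a minimal sketch of an OpenWhisk action that reacts to a batch of Kafka messages. It assumes the Message Hub trigger feed delivers the batch as `params.messages`, with each message's payload in a `value` field; treat those field names, and the action itself, as illustrative rather than the demo's actual code.

```javascript
// Sketch of an OpenWhisk JavaScript action fired by a Message Hub (Kafka)
// trigger. Assumption: the feed passes a batch as params.messages, where
// each entry carries its payload in `value`.
function main(params) {
  const messages = params.messages || [];

  // Pull out each payload; a real action might transform these events or
  // publish derived results to another Message Hub topic.
  const payloads = messages.map(m => m.value);

  // The returned object becomes the action's activation result.
  return { count: payloads.length, payloads: payloads };
}

exports.main = main; // lets the function be required and tested locally
```

Because OpenWhisk scales action invocations with the rate of incoming messages, logic like this autoscales with the stream instead of running on always-on infrastructure.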

Breakout Sessions

IBM is proud to feature the following speakers at the Kafka Summit – make sure to schedule their sessions into your day on Monday, August 28.

Holden Karau, Principal Software Engineer
11:20 am – 12:00 pm, Streams Track
Streaming Processing in Python – 10 ways to avoid summoning Cuthulu
Using Python and want to process data from Kafka? This talk will look at how to make this awesome. In many systems the traditional approach involves first reading the data in the JVM and then passing it to Python, which can be a little slow and, on a bad day, results in errors that are almost impossible to debug. This talk will look at how to do better in Spark and how to do the same in Kafka Streams.

Edoardo Comar, Senior Developer, Message Hub, and Andrew Schofield, Chief Architect, Hybrid Cloud Messaging
2:40 pm – 3:20 pm, Systems Track
Kafka and the Polyglot Programmer
An overview of the Kafka clients ecosystem. APIs: wire protocol clients, higher-level clients (Streams), and REST. Languages: the most developed clients, Java and C/C++; the librdkafka wrappers node-rdkafka, Python, Go, and C#; and why to use wrappers. Plus shell-scripted Kafka.

In addition to sponsoring the overall event, IBM is also the sponsor of the Kafka 1.0 release party on August 28th from 6:10 – 8:30 pm – join us! Please share IBM's participation and registration details with your developer community. Register today. To learn more, visit IBMCode.
