Events

Online Conference

Digital Developer Conference: AI & Cloud


The Digital Developer Conference: AI & Cloud is designed for developers interested in cloud and AI technologies, and it addresses the unique needs of coders. At this free online conference, get hands-on experience and engage with expert developers who will share insights on topics including AI/ML innovation from IBM Research AI, open-source deep learning, model bias identification, multicloud best practices, cloud security with DevSecOps, and getting the most out of cloud native development. See client stories and how technology is being deployed to address some of the biggest issues facing developers today.

What to expect from the Digital Developer Conference



A complete event that allows you to participate from the comfort of your office, your home, or anywhere in between, using a mobile device or laptop.


4 Keynote Sessions
6 Technical Deep Dives
2 Interactive Labs

Gain practical skills and knowledge from expert developers in cloud, AI, open source app and container security, multicloud management, deep learning, model bias avoidance, app modernization, cloud native programming best practices, and more.


Hands-on Learning for Developers

Programming labs and exercises designed for developers by developers to help you get started coding right away.


Gain Eminence with Badges

Obtain industry-recognized skills backed by IBM credentials, and proudly display them to your friends, colleagues, and co-workers via social media.

Who should attend?

Cloud Developers

The Cloud Native developer track will provide developers with insights into key technologies to rapidly build secure applications that can be managed and optimized across multiple cloud providers. Come up to speed with new open source tools for native container-based development, testing and deployment. Obtain insights from the field on the best practices for creating secure microservices-based applications. Get hands-on with the IBM Kubernetes service and learn how to deploy containerized applications. Complete the Cloud Native track and receive the IBM Cloud Native badge.

ML Developers

The Machine Learning developer track is ideal for developers and data scientists interested in collaboration across teams, using top open source tools and scaling at enterprise speed. Learn how to build, train, deploy and manage effective models while getting hands-on experience understanding how to recognize and avoid model bias. Complete the Machine Learning track and receive the IBM Machine Learning badge.

Digital Conference Schedule and Sessions

All times are New York local time, Eastern Standard Time (GMT-5).
The conference runs two parallel tracks: the Cloud Native Development Track and the Machine Learning Development Track.
12:00 - 12:05 Modernize and Infuse Intelligence into Applications on a Hybrid Cloud
Hillery Hunter, Vice President and CTO, IBM Cloud; IBM Fellow

Speaker Bio
Hillery is CTO of IBM Cloud, responsible for technical strategy for IBM's cloud-native and infrastructure offerings. Prior to this role, she served as Director of Accelerated Cognitive Infrastructure in IBM Research, leading a team doing cross-stack (hardware through software) optimization of AI workloads, producing productivity breakthroughs of 40x and greater which were transferred into IBM product offerings. Her technical interests have always been interdisciplinary, spanning from silicon technology through system software, and she has served in technical and leadership roles in memory technology, Systems for AI, and other areas. She is a member of the IBM Academy of Technology and was appointed as an IBM Fellow in 2017. Hillery is a BS, MS, and PhD graduate of the University of Illinois at Urbana-Champaign.

12:05 - 12:25
Keynote 1 (Cloud Native track): Containerized Workloads on the IBM Cloud
Chris Rosen, Offering Manager, IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud
Keynote 1 (Machine Learning track): AI for AI: Data Science Simplified
Sam Lightstone, IBM CTO for Data

Kubernetes has emerged as the optimal platform for deploying container-based applications because of its scalability, extensibility, and community. Organizations are using Kubernetes to streamline how developers deploy and update microservices applications with self-sufficiency and agility. IBM provides Kubernetes and OpenShift, a Kubernetes platform with extensions that enhance developer productivity, as managed offerings on the IBM Cloud. This keynote will cover IBM's strategy with Kubernetes, the benefits to developers of a managed Kubernetes offering, and recent customer success stories from The Weather Company and Steelhouse.

Speaker Bio
Chris Rosen is a Program Director, Offering Management, for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud. Chris has held a variety of roles in his 19-year career with IBM and is currently responsible for delivering IBM’s container strategy by working closely with customers, partners, development, design, and research. He has a BS in Information Technology and an MBA, both from Rochester Institute of Technology.

A new era of ML development is upon us, in which AI is being used to facilitate the development of machine learning applications. IBM CTO for Data Sam Lightstone will demonstrate several new advances for ML developers that simplify the process of data science and the development of machine learning applications: integrated tooling across the most popular open source runtimes and authoring tools; automated ML development with AI for data preparation, model development, feature engineering, and hyper-parameter optimization; and new technology for trust and transparency in AI, to understand why AI is making decisions and to discover unwanted bias that may creep into models. Sam will also discuss how AI has evolved to reach this point, and the fundamental shift that is changing the art of computer programming as a consequence.
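Hyper-parameter optimization, one of the automation targets mentioned above, can be illustrated with a minimal random-search loop. This is a generic sketch in plain Python, not IBM tooling; the quadratic `objective` is a stand-in for a real validation metric:

```python
import random

def objective(learning_rate, depth):
    """Stand-in for a validation score; a real search would train and evaluate a model."""
    return -(learning_rate - 0.1) ** 2 - 0.01 * (depth - 5) ** 2

def random_search(trials=200, seed=42):
    """Sample hyper-parameters at random and keep the best-scoring combination."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {"learning_rate": rng.uniform(0.001, 0.5),
                  "depth": rng.randint(1, 12)}
        score = objective(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = random_search()
print(params, round(score, 4))
```

Automated tooling layers smarter strategies (Bayesian optimization, early stopping) on top of this same loop, but the contract is identical: propose parameters, score them, keep the best.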

Speaker Bio
Sam Lightstone is IBM Chief Technology Officer for Data, an IBM Fellow, and a Master Inventor in the IBM Data and AI group. He leads a number of technical teams in product development for relational databases, data warehousing and big data, cloud computing, analytics for IoT, data virtualization, data movement, and machine learning. He co-founded the IEEE Data Engineering Workgroup on Self-Managing Database Systems. Sam has more than 65 patents issued and pending and has authored 4 books and over 30 papers. Sam’s books have been translated into Chinese, Japanese, and Spanish. In his spare time he is an avid guitar player and fencer.

12:25 - 12:45
Keynote 2 (Cloud Native track): Multicloud Management of Applications
Michael Elder, Distinguished Engineer
Keynote 2 (Machine Learning track): Learn More from Less: the Latest from IBM Research in Artificial Intelligence and Deep Learning
Dr. John R. Smith, IBM Fellow, AI Tech, IBM Research AI

As enterprises deploy their application portfolios across multiple clouds for resiliency, security, and economics, new challenges are cropping up. Organizations need visibility to what clusters are deployed where, including their health and operational capacity. They need automated governance and management of policies for security, applications, and infrastructure. Deployed applications need to be managed in a seamless way across the hybrid multicloud architecture. This keynote will show how IBM Multicloud Manager provides what enterprises need to implement an application-centric approach to managing applications across multiple clouds.

Speaker Bio
Michael is the IBM Distinguished Engineer for the IBM Multicloud Platform. He has numerous awarded patents and has been honored with three IBM Outstanding Technical Achievement awards. He is a co-author of Kubernetes in the Enterprise, published by O’Reilly. Michael holds a B.S. in Computer Science from Furman University and an M.S. in Computer Science from the University of North Carolina at Chapel Hill.

Dr. John R. Smith, IBM Fellow, AI Tech, will present an overview of IBM Research's ongoing efforts in Artificial Intelligence (AI). With the recent success of AI applied to narrow tasks, there is tremendous interest in expanding its application to a broader set of problems in enterprise domains. This brings new requirements: to learn more from less, to achieve trustworthy AI in practice, and to enable far greater scaling of AI models for enterprise applications. In this talk, we describe how IBM Research is developing methods for advancing, trusting, and scaling AI for a broader set of enterprise applications.

Speaker Bio
Dr. John R. Smith is an IBM Fellow and Manager of AI Tech at IBM T. J. Watson Research Center. He leads a variety of research topics in Artificial Intelligence (AI) involving vision, multimedia, speech, language, knowledge, and interaction. Dr. Smith has authored several hundred papers in leading AI conferences and journals and has served as Editor-in-Chief of IEEE Multimedia. Dr. Smith is a Fellow of IEEE.

12:45 - 1:00 Lab: IBM Cloud account setup and IBM Cloud Kubernetes Service cluster creation
Justin McCoy, Developer Advocate

A guided tour of IBM Cloud; sign-up for an account, get feature codes to access additional services for free, and create your first managed Kubernetes cluster

Speaker Bio
Justin McCoy is a Developer Advocate with 17 years of experience bringing the latest open source technologies to enterprises alongside IBM's big iron; he knows what it takes to move from ideas to production. With a passion for developers and the client experience, he is shaping a simpler world through software in the areas of cloud computing, machine learning, deep learning, and data science.

1:05 - 1:45
Breakout 1 (Cloud Native track): Microservices in Practice
Roland Barcia, Distinguished Engineer and CTO, Garage Solution Engineering
Breakout 1 (Machine Learning track): AI behind the US Open, Wimbledon, Masters, and ESPN Fantasy Football
Aaron K. Baughman, IBM Distinguished Engineer, Master Inventor

The term microservices has become synonymous with cloud native applications, while at the same time lacking prescriptive criteria for creating microservices. As defined, the microservices architecture has a number of key guiding principles, which are followed to greater and lesser degrees in practice. This talk will touch on these principles, but focus on the goals that teams are really after when they say they are going to "modernize an application to a microservices architecture". Learn how to focus on the transformational practices that are essential to creating effective cloud native applications.

Major sporting events already draw millions of viewers and attendees, but organizers are constantly optimizing the experience, seeking new ways to bring fans into the game and share content from events.

How does the United States Tennis Association (USTA) efficiently parse thousands of hours of tennis footage, looking for the best moments to present to fans? How do they do it at scale?

How can fantasy football managers possibly sift through all of the news, analysis, and statistics generated about their players each week in order to make more informed decisions?

Speaker Bio
Aaron K. Baughman is a Distinguished Engineer and Master Inventor within IBM GBS Interactive Experience, focused on Artificial Intelligence and Emerging Technologies. He has worked with ESPN Fantasy Football, the NFL’s Atlanta Falcons, The Masters, USGA, the Grammy Awards, the Tony Awards, Wimbledon, USTA, the US Open, Roland Garros, the Australian Open, confidential US government agencies, and Indonesian water company PDAM. He was the principal data scientist for AI Video Highlights, which won the IBM Corporate Technical Award and Best of IBM. He is the project lead for ESPN Fantasy Football. He also worked on Predictive Cloud Computing for sports, which has been published in IEEE and INFORMS venues. He was a technical lead on a DeepQA (Jeopardy!) project and an original member of the IBM Research DeepQA embed team. Early in his career, he worked on biometric, software engineering, and search projects for US classified government agencies. He has published numerous papers and a Springer book. Aaron holds a B.S. in Computer Science from Georgia Tech, an M.S. in Computer Science from Johns Hopkins, 2 certificates from the Walt Disney Institute, and a Coursera Deep Learning certificate. Aaron is a 3-time IBM Master Inventor, an IBM Academy of Technology member, a Corporate Service Corps alumnus, a lifelong INFORMS Franz Edelman laureate, a global Awards.ai winner, and an AAAS-Lemelson Invention Ambassador. He has 145 patents with over 100 pending.

1:45 - 2:25
Breakout 2 (Cloud Native track): A Flotilla of Open Source Tools for Container-Based Application Development and Deployment
Erin Schnabel, STSM, Cloud Native Programming Models
Breakout 2 (Machine Learning track): Deploy State-of-the-Art Deep Learning Models in under an Hour
Fred Reiss, Chief Architect, CODAIT, IBM

Moving to the cloud is more than taking an existing monolithic application and running it on Kubernetes. The way applications are developed, built, and deployed is radically changed, alongside business and operational processes. With these changes, the move to the cloud has been difficult for many. The community is continuing to advance, and new open source software projects such as Appsody, Codewind, and Kabanero address these obstacles head on to reduce complexity for individual developers and organizations. In this breakout, learn how developers and solution architects can benefit from these advancements and move to the cloud faster.

Speaker Bio
Erin Schnabel is a Senior Technical Staff Member at IBM working with microservice architectures, cloud native applications, composable runtimes, and Java. Erin has 20 years under her belt as a developer, technical leader, architect, and evangelist. She is passionate about developer experience, particularly how the evolving cloud ecosystem impacts developers and their applications. She prefers learning (and teaching) by doing, as demonstrated by her open source exploits with “Game On!” (https://gameontext.org), a text adventure game that provides developers with a playground for experimenting with backend technologies.

Deep learning, that is, machine learning with deep neural networks, represents one of the largest steps forward in artificial intelligence in the past ten years. In this talk, we'll look at deep learning technology from the perspective of a developer using these advanced models in enterprise applications. We'll talk about the main components of a deployable deep learning model, as well as the potential pitfalls and best practices for managing those components. Then we'll go through a hands-on demonstration of model deployment using one of the deep learning models from the IBM Developer Model Asset eXchange (https://developer.ibm.com/exchanges/models/). We'll walk through the process of identifying the components of the model and deploying each component to the Watson Machine Learning cloud service. Finally, we'll tie everything together with a demo of the model in action!
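The "components of a deployable deep learning model" mentioned above can be sketched as three separable stages: preprocessing, the weighted inference step, and postprocessing. The sketch below, in plain Python, is purely illustrative (the `infer` stub stands in for a real trained network, and none of the names come from the MAX APIs); the point is that all three stages must be versioned and deployed together:

```python
def preprocess(raw_text):
    """Turn raw input into the feature form the model expects."""
    return [len(token) for token in raw_text.lower().split()]

def infer(features):
    """Stub for the trained network: here, a fixed linear scoring rule."""
    return sum(0.1 * f for f in features)

def postprocess(score, threshold=1.0):
    """Map the raw model output to an application-level label."""
    return {"score": round(score, 2), "label": "long" if score > threshold else "short"}

def predict(raw_text):
    # Deploy all three stages as one unit: a model trained against one
    # preprocessing scheme will silently misbehave under another.
    return postprocess(infer(preprocess(raw_text)))

print(predict("Deploy deep learning models"))
```

A common pitfall is shipping new weights while the serving tier still runs last release's preprocessing; treating the pipeline as a single deployable artifact avoids that class of bug.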

Speaker Bio
Fred Reiss is the Chief Architect at IBM’s Center for Open-Source Data and AI Technologies in San Francisco. Fred received his Ph.D. from UC Berkeley in 2006, then worked for IBM Research Almaden for the next nine years. At Almaden, Fred worked on the SystemML and SystemT projects, as well as on the research prototype of DB2 with BLU Acceleration. Fred has over 25 peer-reviewed publications and six patents.

2:25 - 3:05
Breakout 3 (Cloud Native track): DevSecOps: Putting Security in DevOps
Omid Meh, Developer Advocate
Breakout 3 (Machine Learning track): Tools for Machine Learning Developers with Kubernetes Using Open Source
Animesh Singh, STSM and Program Director, IBM

As modern teams move toward Agile and embrace DevOps to shorten their time-to-market, they are quickly met with the challenges of traditional security models. In waterfall models, security analysis and implementation used to take months and was done toward the end of the product cycle. That model doesn’t work in the fast-paced world of DevOps! In this talk, we will explore, at a high level, how security can be included in DevOps, creating DevSecOps. Join us to learn about the shift-left movement, in which security moves left from the end of the product cycle to become an integral part of every stage of the DevOps pipeline.

Speaker Bio
Omid Meh is a security expert with vast experience in implementing DevOps for large projects. His past experience and education also make him knowledgeable in the world of blockchain, computer hardware, embedded systems, software architecture, and software testing. Rumor has it, he has been seen going around town snapping photos when he is not at work!

Build, train, deploy, and manage: with the sheer breadth of functionality that must be addressed in the machine learning world around building, training, serving, and managing models, getting it done in a consistent, composable, portable, and scalable manner is hard. The Kubernetes framework is well suited to address these issues, which is why it’s a great foundation for deploying ML workloads, and Kubeflow is designed to take advantage of these benefits. In this talk, we address how to make it easy for everyone to develop, deploy, and manage portable, scalable ML everywhere, and how to support the full machine learning lifecycle using open source technologies such as Kubeflow, TensorFlow, PyTorch, Argo, Knative, Istio, and others. We will discuss how to enable distributed training of models, model serving, canary rollouts, drift detection, model explainability, metadata management, pipelines, and more.
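Of the lifecycle concerns listed above, drift detection is easy to illustrate in isolation: compare a statistic of live traffic against the training baseline and alert when it moves too far. The plain-Python check below is a deliberately simple mean-shift test, not what Kubeflow or any specific tool implements; production systems use proper statistical tests, and the threshold here is an arbitrary illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def detect_drift(baseline, live, z_threshold=3.0):
    """Flag drift when the live mean sits far from the training mean,
    measured in baseline standard deviations."""
    sd = stdev(baseline) or 1e-12  # guard against a zero-variance baseline
    z = abs(mean(live) - mean(baseline)) / sd
    return z > z_threshold, z

baseline = [10.0, 10.5, 9.5, 10.2, 9.8, 10.1]   # feature values seen at training time
print(detect_drift(baseline, [10.0, 10.3, 9.7]))   # looks like training data
print(detect_drift(baseline, [14.0, 15.0, 14.5]))  # clearly shifted
```

Wired into a serving pipeline, a check like this runs on a sliding window of recent requests and triggers retraining or a rollback when it fires.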

Speaker Bio
Animesh Singh is a senior technical staff member (STSM) and program director for IBM Watson and Cloud Platform, where he leads machine learning and deep learning initiatives on IBM Cloud and works with communities and customers to design and implement deep learning, machine learning, and cloud computing frameworks. He has a proven track record of driving design and implementation of private and public cloud solutions from concept to production. Animesh has worked on cutting-edge projects for IBM enterprise customers in the telco, banking, and healthcare industries, particularly focusing on cloud and virtualization technologies, and he led the design and development of the first IBM public cloud offering.

3:05 - 4:05
Lab (Cloud Native track): Hands-On with Kubernetes
Tim Robinson, Developer Advocate
Lab (Machine Learning track): Evaluating Models for Bias
Javier Torres, Client Developer Advocate & Solutions Architect at IBM

Take a hands-on tour of the IBM Kubernetes Service and get experience with common Kubernetes objects and abstractions. You'll deploy a web terminal to manage your cluster and then create a guestbook containerized application. Add a Kubernetes Service to expose the application and then access it from your workstation. Next, you will add a Redis data store to the application to allow horizontal scaling and see how applications in Kubernetes discover resources in a cluster. In the last part of the lab you will learn about the Operator Framework and how to use the IBM Cloud operator to manage a Watson API service. After creating the service and credential binding using a custom resource definition (CRD), you'll add a microservice to the application for analyzing input to the guestbook application.
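A Kubernetes Service of the kind the lab asks you to add can be as small as the manifest below. This is a generic sketch, not the lab's actual files; the `guestbook` names and port numbers are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook
spec:
  type: NodePort          # expose the app outside the cluster
  selector:
    app: guestbook        # route traffic to pods carrying this label
  ports:
    - port: 3000          # port the Service listens on inside the cluster
      targetPort: 3000    # container port on the guestbook pods
```

Applied with `kubectl apply -f service.yaml` (assuming the manifest is saved under that name), the Service gives the matching pods a stable cluster address, and `type: NodePort` additionally opens a port on each worker node that is reachable from your workstation.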

Speaker Bio
Tim Robinson is a Silicon Valley based developer and Master Certified IT Specialist specializing in hybrid cloud architecture, security, and networking. He is focusing on developer and community programs and education for creating solutions based on Cloud Foundry, Docker, and other open computing platforms. His hobbies include running, craft beer-making, and flying high power model rockets.

IBM Watson OpenScale is an open environment that enables organizations to automate and operationalize their AI. OpenScale provides a powerful platform for managing AI and ML models on the IBM Cloud, or wherever they may be deployed, offering these benefits:

  • Open by design — Watson OpenScale allows monitoring and management of ML and DL models built using any frameworks or IDEs and deployed on any model hosting engine.
  • Drive fairer outcomes — Watson OpenScale detects and helps mitigate model biases to highlight fairness issues. The platform provides plain-text explanations of the data ranges that have been impacted by bias in the model, along with visualizations that help data scientists and business users understand the impact on business outcomes. As biases are detected, Watson OpenScale automatically creates a de-biased companion model that runs beside the deployed model, previewing the expected fairer outcomes to users without replacing the original.
  • Explain transactions — Watson OpenScale helps enterprises bring transparency and auditability to AI-infused applications by generating explanations for individual transactions being scored, including the attributes used to make the prediction and weightage of each attribute.
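Fairness monitoring of the kind described above ultimately comes down to comparing outcome rates across groups. The plain-Python check below is an illustrative metric, not the Watson OpenScale implementation; the 0.8 threshold follows the commonly cited "four-fifths rule":

```python
def favorable_rate(outcomes):
    """Fraction of predictions that are the favorable outcome (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(monitored, reference):
    """Ratio of favorable-outcome rates: monitored group vs. reference group.
    Values below ~0.8 are a common red flag (the 'four-fifths rule')."""
    return favorable_rate(monitored) / favorable_rate(reference)

# 1 = loan approved, 0 = denied; groups split on a protected attribute
reference = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
monitored = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

print(round(disparate_impact(monitored, reference), 2))
```

A monitoring platform computes ratios like this continuously over logged transactions, which is why the payload logging covered in the lab matters: without the scored requests and responses on record, there is nothing to measure fairness against.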

When you have completed this Lab, you’ll understand how to:
  • Build a custom model serving engine using Keras
  • Access the custom model using a REST API
  • Log the payload for the model using Watson OpenScale

Speaker Bio
Javier Torres, Client Developer Advocate & Solutions Architect at IBM


Hillery Hunter
Vice President and CTO, IBM Cloud; IBM Fellow

Michael Elder
IBM Distinguished Engineer

Margriet Groenendijk
Developer Advocate

Roland Barcia
IBM Distinguished Engineer

Erin Schnabel
STSM, Cloud Native Programming Models

Omid Meh
Developer Advocate

Sam Lightstone
IBM CTO for Data

Riya Roy
Developer Advocate

Dr. John R. Smith
IBM Fellow, AI Tech, IBM Research AI

Javier Torres
Client Developer Advocate & Solutions Architect at IBM

Aaron K. Baughman
IBM Distinguished Engineer, Master Inventor

Animesh Singh
STSM and Program Director, IBM

Fred Reiss
Chief Architect CODAIT, IBM

Tim Robinson
Developer Advocate

Binu Midhun
Developer Advocate

David Okun
Developer Advocate

Justin McCoy
Developer Advocate

Chris Rosen
Offering Manager, IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud

Edmund Shee
Developer Advocate

