Author’s note: If you’d rather listen to me present this topic, you can watch this recorded tech talk.
Many businesses need to create new mobile experiences from data sets that were not structured with mobile in mind.
Earlier this year, we launched a native mobile app version of IBM Developer.
The IBM Developer Mobile app introduced an experimental chatbot component. It was built in just 12 weeks by a very small team of contributors distributed across four continents! It was a mini product creation, from concept to corporately approved app, built on IBM Cloud.
We learned a lot building this app, and wanted to share the approaches we took to scaling, monitoring, securing and creating a chatbot, for others to learn from too. We think there are a few common themes that mobile app developers need to consider.
Our IBM Developer Mobile app uses Elasticsearch to index a library of content. We created APIs around the indexed content for various needs.
Equipped with a set of tailored APIs, we could develop efficient mobile app views: a chatbot, code content views, an event app, all of which surfaced the legacy content in fresh, modern ways.
Customers' expectations for fast, mobile, secure access to information are higher than they have ever been. Cloud-based, container-based solutions with microservices offer the only truly flexible, scalable future for enterprise software. Yet only 20% of enterprise software has moved to the cloud.
Our IBM Developer mobile app was created as a side project, which we’ve watched and learned from. The team that built it was new to mobile app development, and had to research, learn, and adapt quickly, so that the app was robust and scalable for thousands of people to use at the same time. We thought that we’d turn our experience into a collection of code patterns and articles that enterprise teams, and independent developers, might learn from too. A lot of our approaches are ‘standard’ patterns for surfacing and scaling data with Kubernetes or OpenShift.
Our work started at the tail end of 2018. We scoped out milestones no more than two weeks apart. Each milestone was purposefully a mini internal product, even if some of those mini products did very little.
We used an agile approach effectively. We worked hard to complete the tasks for each milestone, so that we could build with confidence toward delivering a working, published app for launch at the 2019 Think conference in San Francisco. That meant not compromising on a carefully defined plan, communicating well through standups, and being honest at each retrospective along the way.
Cloud-based solutions and agile development are made for each other. Cloud development offers microservices that can be created or tweaked flexibly to host new features, bug fixes, or improved usability. Continuous Integration and Continuous Deployment (CI/CD) mean that any flaw is almost immediately noticeable, and that the health of the system that you are creating is constantly tested and monitored. We would not have been able to create our app in such a short time any other way.
A common problem for publicly consumable mobile apps is how to develop for scale. Turning an idea into a working app is one step, but what happens when thousands of people access the mobile app at the same time? Developing a mobile app at scale is an especially thorny problem for chatbots, because they may need to process thousands or tens of thousands of messages at a time.
The IBM Developer Mobile app backend queues incoming messages so that they can be processed at a pace that the Watson Assistant service and business logic can handle. The mobile app relies on a RabbitMQ implementation to queue the messages.
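In production this buffering is handled by RabbitMQ, but the pattern itself is worth sketching. The following is a minimal, illustrative in-process queue in Node.js that drains messages at a bounded concurrency; a real deployment would publish to and consume from a RabbitMQ broker instead (for example via a client library such as amqplib), and the class and field names here are assumptions, not the app's actual code:

```javascript
// Sketch of the queuing pattern: incoming chat messages are buffered and
// drained at a bounded rate, so downstream services (Watson Assistant,
// business logic) are never handed more work than they can absorb.
class MessageQueue {
  constructor(handler, concurrency = 2) {
    this.handler = handler;         // async function that processes one message
    this.concurrency = concurrency; // max messages processed in parallel
    this.queue = [];
    this.active = 0;
  }

  enqueue(message) {
    this.queue.push(message);
    this.drain();
  }

  drain() {
    // Start as many handlers as the concurrency limit allows;
    // each completion frees a slot and re-triggers the drain.
    while (this.active < this.concurrency && this.queue.length > 0) {
      const message = this.queue.shift();
      this.active += 1;
      Promise.resolve(this.handler(message)).finally(() => {
        this.active -= 1;
        this.drain();
      });
    }
  }
}
```

The key design point survives the simplification: producers (mobile clients) are decoupled from consumers (the chatbot logic), so a burst of traffic lengthens the queue rather than overwhelming the services behind it.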
From there, the messages are passed to Watson Assistant, which parses them to understand the intent and the entities (subjects) that they relate to.
The intent and entity information forms the basis of the queries that are passed to Elasticsearch. The Elasticsearch index was structured to align with the Watson Assistant conversation model, so that confidence levels are as high as possible for the queries sent.
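To make the intent-and-entity-to-query step concrete, here is a hedged sketch of how Watson Assistant output might be translated into an Elasticsearch query body. The field names (`topics`, `body`) and the shape of the mapping are illustrative assumptions, not the app's actual index schema:

```javascript
// Hypothetical translation of Watson Assistant output into an
// Elasticsearch query. Entities become required matches against the
// indexed topic fields; the intent softly boosts relevant documents.
function buildSearchQuery(intent, entities) {
  return {
    query: {
      bool: {
        // Each recognized entity (subject) must match an indexed topic.
        must: entities.map((e) => ({ match: { topics: e.value } })),
        // The intent nudges scoring toward documents whose body text
        // aligns with what the user is trying to do.
        should: [{ match: { body: intent } }],
      },
    },
  };
}
```

Because the index was shaped to mirror the conversation model, a query built this way lines up with how the content was indexed, which is what keeps the result confidence high.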
Business logic related to the chatbot is coded in Node.js and runs in a Kubernetes cluster. The Elasticsearch endpoints also run in Kubernetes, written in Java (with Open Liberty as the underlying server for the Java microservice). This system architecture highlights the versatility of a microservices approach: different engineers on the team may have different coding skills, and one programming language may simply work better than another for a specific problem.
Creating a chatbot built on legacy data
Many companies and organizations are interested in creating text or voice interfaces for access to their services or legacy data. For example, a government agency might want to automate questions about a policy document, or a realty firm might want to create voice interfaces for property searches.
Some people perceive that creating a chatbot is fast and easy. While it is quick these days to put a chatbot framework in place, a quality chatbot experience can't be created instantly.
Elasticsearch is a powerful and efficient way to index a set of text-based data around a predetermined set of topics. It is also a natural fit for aligning with a conversation model.
Indexing legacy data with Elasticsearch can break unstructured text down into data structures that can fuel a chatbot.
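As a small illustration of what "breaking down legacy data" looks like in practice, the sketch below prepares a batch of articles for Elasticsearch's `_bulk` API, which expects newline-delimited JSON pairs of an action line and a document line. The index name and document fields are illustrative assumptions, not the app's actual schema:

```javascript
// Hypothetical helper: convert legacy articles into an Elasticsearch
// _bulk payload. Each document becomes an action line followed by a
// source line, newline-delimited, as the _bulk endpoint expects.
function toBulkPayload(indexName, articles) {
  const lines = [];
  for (const article of articles) {
    lines.push(JSON.stringify({ index: { _index: indexName, _id: article.id } }));
    lines.push(JSON.stringify({ title: article.title, body: article.body }));
  }
  // _bulk payloads must be terminated by a trailing newline.
  return lines.join('\n') + '\n';
}
```

A payload like this would then be POSTed to the cluster's `_bulk` endpoint; the important idea is that each free-form article becomes a structured, queryable document.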
We've published a code pattern all about integrating Elasticsearch with Watson Conversation, just as we did when building the IBM Developer Mobile app.
Testing for scale using Artillery
How can you know if your mobile app is going to perform the same way for 1 developer as for 100,000 developers, before 100,000 developers try it?
By simulating thousands of messages being sent to the app at the same time, it is possible to explore the behavioral characteristics of a system under stress.
For the IBM Developer Mobile app, we used a tool called Artillery. Artillery lets a developer easily simulate customized volumes and rates of message input to a web service, which made it perfect for modeling the usage of the app with simulated message input. And combining Artillery with LogDNA helped us understand the behavior of the system under stress.
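A simple Artillery test plan is just a YAML file. The fragment below is a hypothetical example, not our actual test plan: the target URL, endpoint path, and message are placeholders, and the load phase simply ramps up the arrival rate of virtual users over two minutes:

```yaml
# Hypothetical Artillery test plan (run with: artillery run loadtest.yml).
# The target, endpoint, and payload are placeholders.
config:
  target: "https://chatbot.example.com"
  phases:
    # Ramp from 5 to 50 new virtual users per second over 2 minutes.
    - duration: 120
      arrivalRate: 5
      rampTo: 50
scenarios:
  - flow:
      - post:
          url: "/api/message"
          json:
            text: "How do I get started with Kubernetes?"
```

Varying the phases lets you probe different behaviors: a long, gentle ramp exposes gradual degradation, while a short spike shows how the queue absorbs a burst.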
Monitoring usage with LogDNA
The IBM Developer Mobile app is instrumented with LogDNA for monitoring the requests between the mobile applications and the cloud software that is serving them.
LogDNA helps us understand the ratio of requests between Android and iOS devices. It also illustrates the frequencies and patterns of use.
For instance, we were able to correlate a spike in usage of the app with an announcement about the availability of it at an event in India.
Setting up a mobile app with LogDNA is fairly quick and easy, and the return on investment is huge: for a short setup time, it offers insightful results that enable evidence-based decisions about the value and evolution of a mobile app.
See this article for more guidance on using LogDNA with container-based applications.
Our team loved building this app. Many of them had never created a published mobile app before, or worked to establish a new cloud-based system intended to scale.
We learned like all agile development teams learn: by failing along the way. For example, the chatbot worked for 1 person or 5 people at a time, but failed completely when 10 of us tried at once, which got us scrambling and researching for solutions.
In our opinion, most mobile apps and most websites that are intended to scale will face the same challenges that we faced, and developers can benefit from the same solutions.
For the IBM Developer website, we produce in-depth examples called code patterns, in homage to the Design Patterns book that inspired many of us earlier in our careers.
We believe that the themes we discussed in this article are modern day patterns for cloud development. We hope to add some more code patterns inspired by our mobile app in the months to come.