At WWDC 2017, Apple announced iOS 11 along with a new AI-based framework, Core ML. Core ML promises to move iOS app development to the next level for enterprise applications that demand AI and decision-making capabilities.

With server-side Swift’s ability to run on different OS platforms, including iOS, I built a quick proof of concept (PoC) to explore and understand the Core ML capabilities. I integrated Kitura with an iOS app and exposed the native iOS Core ML framework as an open external API. In other words, Core ML is not a service tightly bound to its built-in native iOS SDK. By integrating Kitura into the iOS app, you can make the service available to any external device. A single iPhone or an Xcode Simulator can then serve as a mini server that provides the AI service to other devices.

The first part of this article introduces Core ML and shows how to integrate it with Kitura server-side Swift. The second part is a detailed comparison of the Watson Visual Recognition service and the Core ML service. This comparison should help you judge how enterprise-ready these services are.

Core ML for iOS

Core ML is a framework that enables iOS devices to run and process machine learning models. At the heart of the execution process is the mlmodel file, the trained model. Apple’s developer documentation website links to several trained machine learning models in the Core ML compatible format for you to download and try.

Image classification and object identification are two interesting applications of deep learning, and their growing use in today’s enterprise development is a key factor in deep learning’s popularity. I created a simple iOS application that takes an image as input and uses the Core ML API with a trained mlmodel file to recognize and classify the image. I then converted this app into a hybrid application with Kitura Swift integrated. For more information, read Kitura/iOS: Running a Web Server on your iPhone.

The idea behind integrating server-side Swift with the iOS app is to expose the Core ML service like any other standard REST API: it accepts image data and returns a JSON result.

The section below describes how to achieve this.

Delegation protocol and Server Swift multiple router handler

Delegation is a design pattern that enables a class or structure to hand off (or delegate) some of its responsibilities to an instance of another type. It is one of the most commonly used patterns in iOS programming, typically implemented with protocols. The server-side Swift router, in turn, is designed to process a request and response with multiple handlers. Combining delegation with the router lets you divide a request handler’s work into subtasks and delegate any iOS-specific subtask (in this example, counting API hits) to instance methods of another class. This combination is a quick way to expose iOS features as a Kitura Swift API service.

A few other use case examples are:
• API that performs Core ML operations and returns a JSON response
• API that sends email using the iPhone’s registered account
• API that returns reverse-geocoded coordinates using MapKit
• API that uses Siri’s intelligence and SiriKit to process requests
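The delegation half of this combination can be sketched in plain Swift. The protocol and type names below are illustrative stand-ins, not taken from the actual app; only the delegate method name, didReceiveRequest, mirrors the one used in the code snippet later in this section.

```swift
import Foundation

// Hypothetical delegate protocol; in the real app, the main view
// controller adopts a similar protocol to be told about incoming requests.
protocol RequestDelegate: AnyObject {
    func didReceiveRequest(info: String)
}

// Stand-in for the router-owning type: it hands the "count this hit"
// subtask off to whichever instance registered as its delegate.
final class RequestNotifier {
    weak var delegate: RequestDelegate?

    func handle(url: String) {
        // The real handlers hop to the main queue before touching UIKit;
        // this sketch calls straight through.
        delegate?.didReceiveRequest(info: url)
    }
}

// Example adopter that counts API hits, like the app's hit counter.
final class HitCounter: RequestDelegate {
    private(set) var hits = 0
    func didReceiveRequest(info: String) { hits += 1 }
}
```

Wiring a HitCounter as the delegate and calling handle(url:) once per request increments the count without the notifier knowing anything about counting, which is exactly the separation the pattern buys you.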

In the code snippet below, the Kitura POST route uses three different handlers. The notifyRequest and responseProcessed methods delegate to the main view controller class to signal that an API request arrived and that its response was sent. processRequest calls the Core ML image-processing module and returns the JSON result.

Code snippet: Kitura post request module uses three different handlers

func responseProcessed(request: RouterRequest, response: RouterResponse, next: @escaping () -> Void) {
    DispatchQueue.main.async {
        // Notify the view controller that a response was sent.
    }
    next()
}

func notifyRequest(request: RouterRequest, response: RouterResponse, next: @escaping () -> Void) {
    DispatchQueue.main.async {
        self.delegate!.didReceiveRequest(info: request.originalURL)
    }
    next()
}

func processRequest(request: RouterRequest, response: RouterResponse, next: @escaping () -> Void) {
    var imageData = Data()
    do {
        _ = try request.read(into: &imageData)
        // Run the Core ML classifier on imageData and send back the JSON result.
    } catch { response.status(.badRequest) }
    next()
}

Figure 1 shows the integrated app running the Core ML service and exposing it through Kitura Swift. For every request, the API hits count increases; the delegates handle the count on the main thread. It’s a success: a working prototype of a Kitura Core ML service running on an iOS device.

I developed another native-only iOS app to consume this service. Read on for the comparison I made with the Watson Visual Recognition service.
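A consuming app only needs a plain HTTP POST with the image bytes in the body. The sketch below builds such a request with Foundation types; the host, port, and /coreml path are my assumptions for illustration, so substitute whatever route your Kitura router actually registers.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// Build a POST request carrying raw image bytes to the Kitura-hosted
// Core ML endpoint. Host, port, and path are illustrative assumptions.
func makeClassificationRequest(imageData: Data) -> URLRequest {
    var request = URLRequest(url: URL(string: "http://192.168.1.10:8080/coreml")!)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = imageData
    return request
}
```

Send the request with URLSession.shared.dataTask(with:) and decode the JSON body the service returns.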


Comparison of visual recognition services

Core ML is Apple’s iOS based framework to process machine learning models. Watson is IBM’s cognitive platform built with AI capabilities. Watson provides multiple AI and decision-making services for enterprise business systems and is available as a set of open APIs and SaaS products on IBM Cloud.

Curious about the accuracy and maturity of the two systems, I extended the native-only PoC app to run and compare a set of images against:
• The Core ML API built with Kitura Swift, which uses the pre-trained VGG16 Keras model.
• The Watson Visual Recognition service API, a default service that comes pre-configured with a trained example classifier engine to recognize images.

I took 15 different digitally processed images for the comparison. I chose processed images to test the accuracy of the trained model services when the input image loses natural color and pixel information. Because of file-size constraints, the images were further compressed to JPEG format.

Figure 2 shows the app running each service for comparison. Two parameters of each service response are extracted (as shown in the screenshot): the classified/identified string and the confidence factor, on a scale from 0 to 1.
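Both responses boil down to those two fields, so the comparison code can decode them with one small Codable type. The JSON key names below are illustrative assumptions, not the exact payload of either service.

```swift
import Foundation

// Minimal shape for one classification result: the identified label
// and a confidence score in the 0-1 range. Key names are assumptions.
struct ClassificationResult: Codable {
    let classification: String
    let confidence: Double
}

// Decode a single result from a JSON response body.
func parseResult(from json: Data) throws -> ClassificationResult {
    try JSONDecoder().decode(ClassificationResult.self, from: json)
}
```

The same type can decode either service’s response once the keys are mapped, which keeps the comparison harness to a few lines.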


The result set below shows that Core ML with VGG16 identified 6 of the 15 images correctly (a 40% hit rate), two of them with a confidence above 90%. Watson recognized 11 of the 15 images (roughly 73%) and scored above 90% confidence on seven of them. In some instances the Core ML VGG16 model did better, for example the image of tablet pills on the right (shown in Figure 2, above). But overall, Watson predicted accurately more often. Keep in mind that the samples are artificially processed and compressed images.


There’s more to this than just the accuracy rate. Another consideration is the trained model. Both Watson and Core ML let you customize the classifier with another model to improve accuracy. However, creating and converting a better-trained classifier model for iOS takes much more effort than using a simple Watson service: for Watson, you only need to provide a set of positive and negative images and train the engine; for Core ML, you need to build a complex Python model and convert it into an mlmodel file.
The model creation and conversion workflow might improve in the future. For now, though, Watson is a mature and stable service for the enterprise world. I consider Core ML an experimental platform until Apple makes significant improvements.

This article demonstrated how to turn a core iOS feature into a working Kitura Swift server-side service, and compared the maturity and enterprise-development readiness of Core ML and a Watson service.

All the source code for this article is in my Git repository; please fork it and try it out. My next article will analyze the possibility of turning the Core ML framework into a full-fledged server-side framework… stay tuned.


Resources

• Build more intelligent apps with machine learning
• Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks
• What’s new in iOS 11.0
• Keras: The Python Deep Learning library
• Kitura/iOS: Running a Web Server on your iPhone
• Unlock the power of cognitive with Watson
• IBM Cloud

My sincere thanks to all my mentors and managers for their support and motivation.

This article is not peer- or expert-reviewed. The content is based on my own knowledge and experience; my intent is to share my learnings and findings with fellow developers, and the information might not be 100% accurate.

The enterprise demands powerful and intuitive frameworks to take applications to the next level of intelligence. It is important to learn, understand, and adopt trending technologies early. Keep coding and keep learning. If you have any questions or comments, you can reach me at

Happy Coding!!!!

