Overview

Skill Level: Any Skill Level

This tutorial enables developers to build a machine learning based iOS application with minimal or no knowledge of machine learning algorithms. By the end of this tutorial, you will have an iOS application that can recognize images using a custom Watson visual recognition model.

Ingredients

  • Xcode (version 9 and above)
  • Swift Programming Language


Step-by-step

  1. Watson + CoreML Overview

         CoreML, Apple's native machine learning framework, was introduced at WWDC 2017. A machine learning model is the combination of a dataset and a training algorithm. Although CoreML makes consuming models easy, generating a model still requires adequate knowledge of machine learning. With Watson, even a novice can create efficient, client-ready machine learning models: the Watson Visual Recognition service lets you add data and train on it to generate a model with a few button clicks. You do not have to write any code to generate the model.

    Traditionally, Watson has been a server-side component and CoreML a client-side one. With the Watson and CoreML integration, Watson-trained models can now be used on the client side.

  2. Creating IBM Watson Studio account

         The first step is to create an IBM Cloud account. Once you have created the account as shown below, add the Visual Recognition service, which lands you on the Watson page depicted in the next step.

                [Screenshots: creating the IBM Cloud account]


    [Screenshot: adding the Visual Recognition service]

  3. Creating Custom Model

         Open the Visual Recognition service from the services list, and you will see a console as depicted below.

     [Screenshot: Visual Recognition console]

    You can use one of the existing demo models from the console above or create your own. To build a custom model, click “Create Model” on the custom model tile shown below.

     [Screenshot: creating a custom model]

    Now start adding assets (the dataset of images) to the box, and you will see a class being created.

     [Screenshot: adding image assets to create a class]

    After adding the assets, the “Train Model” button at the top becomes enabled. Press it to train the model on the dataset; the model is then ready to use.

    [Screenshot: the Train Model button]

    You can also test the model in the browser by clicking “Test”, as shown above.

     [Screenshot: testing the model in the browser]

    You can either download the model directly from the browser or copy the “classifier id” and “api key” to consume it from the app.

    [Screenshot: downloading the model or copying the classifier ID and API key]


  4. Integrating custom model with iOS Application

         Now that we have created a custom model, let us create the iOS application that consumes it. Create a new Xcode project (make sure your Xcode version is 9 or above).

    Run the following command in the project's root directory to create a Cartfile (Carthage is a dependency manager similar to CocoaPods).

    touch Cartfile

    Then open the created Cartfile in Visual Studio Code (or any code editor you prefer) and paste the following line.

    github "watson-developer-cloud/swift-sdk"

    Then open the terminal, change into the directory where the Cartfile is located, and run the following command.

    carthage update --platform iOS

    This installs all the dependencies and also builds the frameworks for you, as shown below.

     [Screenshot: Carthage building the frameworks]

    Now, go to the Xcode project we created and add the Visual Recognition framework that Carthage built (it is in the Carthage/Build/iOS folder). Now that the Xcode project is set up, let us start coding.

     let apiKey = "your_project_api_key"  // Your Watson Visual Recognition service API key

     let versionAPI = "current_date"  // The Visual Recognition API version, which is the current date. Even if you give another date, it does not really matter when you create it for the first time.

     var classifierId = "your_default_classifier_id"  // The classifier ID of your custom model
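
    With these values in place, instantiate the Visual Recognition client from the Watson Swift SDK. The snippet below is a minimal sketch: the initializer's parameter labels and order have changed between SDK releases, so treat them as an assumption and check the README of the SDK version that Carthage fetched for you.

     import VisualRecognitionV3   // the Visual Recognition framework built by Carthage

     // Minimal sketch (parameter labels/order may differ in your SDK release):
     let visualRecognition = VisualRecognition(apiKey: apiKey, version: versionAPI)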

    Download/Update the model:

         This call from the Visual Recognition framework downloads the machine learning model and keeps the local copy up to date.

    visualRecognition.updateLocalModel(classifierID: classifierId, failure: failure, success: success)
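
    The failure and success arguments are closures you provide. A minimal sketch of handlers that would satisfy the call above follows; the exact closure shapes are assumptions, so match them to the signatures in your SDK release.

     // Hypothetical handlers for the model download/update call above.
     let failure = { (error: Error) in
         print("Model update failed: \(error)")
     }
     let success = {
         print("Local model is up to date")
     }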

    The rest of the iOS code, which takes the pictures, can be found in the sample project referenced below.

    Classification:

         The classification of the image is done locally, without hitting the server, which is the key advantage of the Watson SDK's CoreML integration.

    visualRecognition.classifyWithLocalModel(image: image, classifierIDs: [classifierId], threshold: localThreshold, failure: failure)
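
    The call above omits the completion handler for brevity. In practice you also pass a success closure that receives the classification results. The sketch below assumes the ClassifiedImages result type used by the Watson Swift SDK; adjust the property names if your SDK release differs.

     visualRecognition.classifyWithLocalModel(image: image,
                                              classifierIDs: [classifierId],
                                              threshold: localThreshold,
                                              failure: failure) { classifiedImages in
         // Hypothetical handling: read the top class of the first classified image.
         guard let topClass = classifiedImages.images.first?.classifiers.first?.classes.first else {
             print("No classification returned")
             return
         }
         print("Class: \(topClass.className), score: \(String(describing: topClass.score))")
     }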

              [Screenshots: the iOS app classifying an image with the local model]


  5. Utilizing multiple Machine Learning Models

         We have created a custom model and successfully integrated it with the iOS application. However, a typical enterprise or consumer application requires multiple models to identify various products. The trial account allows each user to create only one custom model, so to create multiple models you have to purchase the service. Once you have purchased it and created the models, the following iOS code lets the app consume multiple models at a time.

    let localModels = try? visualRecognition.listLocalModels()
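
    listLocalModels() returns the classifier IDs of the models already downloaded to the device. Below is a minimal sketch of keeping several models available locally and classifying with whichever one is currently selected; the classifier ID strings and the selectedClassifierId variable are hypothetical placeholders, not part of the sample project.

     // Hypothetical: make sure each purchased classifier has a local copy.
     let classifierIds = ["classifier_id_1", "classifier_id_2"]
     for id in classifierIds where !(localModels ?? []).contains(id) {
         visualRecognition.updateLocalModel(classifierID: id, failure: failure, success: success)
     }

     // Classify with the model the user has currently selected.
     visualRecognition.classifyWithLocalModel(image: image,
                                              classifierIDs: [selectedClassifierId],
                                              threshold: localThreshold,
                                              failure: failure,
                                              success: success)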

    [Screenshot: the app consuming multiple local models]


  6. References

    Here are some references for further reading:

    IBM Think 2018 announcement:

    https://www.ibm.com/blogs/think/2018/03/ibm-apple-ai/

    Apple Announcement:

    https://developer.apple.com/ibm/

    Sample Project for Reference:

    https://github.com/watson-developer-cloud/visual-recognition-coreml
