In this final installment of our three-part blog series covering the Watson + GBS Challenge, we're sharing the development experiences of three other winning teams – part two here. If you're looking for detailed understanding and insights directly from the teams themselves, look no further.

Here's a look at what Paulo Cavoto had to say about the development of Cognitive Head Hunter.

What is Cognitive Head Hunter?

Cognitive Head Hunter is an application that uses Watson cognitive services to enhance both the job search and candidate matching processes.

What challenge does Cognitive Head Hunter address?

Every day, millions of job seekers, employers, and head hunters work to make matches in all types of vocational spaces. However, despite huge advancements in the connectedness of structured databases, the process remains inefficient. Cognitive Head Hunter seeks to change the game and improve the efficacy and experience for all parties.

What APIs did you integrate? Which API do you think offers the most game-changing capability?

We integrated Concept Insights to extract the key concepts from the candidate's curriculum and from the job offering. Additionally, we added Personality Insights to help understand the personality characteristics, needs, and values the candidate expresses via his/her LinkedIn profile or uploaded curriculum.

On a scale of 0-10, what was your team's overall experience level using Watson services or Bluemix prior to this event (0 = none, 10 = expert user)?

Our team was probably an 8 for Bluemix and a 5 for Watson services. We had all played around with other projects using Bluemix, but this was really not the case with Watson. This was truly our first experience creating something meaningful and integrated with Watson services.

How difficult would you say it was to build your application using the Bluemix platform?

Really, really simple. A good example of this simplicity was how we solved a database requirement during our development.
We were using the Watson Concept Insights corpus to store our candidate analyses but found that we needed to store their IDs to improve search performance. We simply went to Services -> New Service -> Mongo and boom! We were instantly provided an instance and credentials, and that challenge was solved.

What advice would you give to a developer looking to build with Watson APIs who hasn't done so before?

Our advice is to check out the demos and the developerWorks tutorials first. The Watson world is huge; find a use of the APIs that makes sense to you, and then build something similar or iterative to that use case or concept. Once you've done that, you'll be ready to take on a more ambitious original challenge.

What do you see as the next step for your application's development?

The next steps for us are to integrate with larger databases of job opportunities and with another social network. Another step will be to improve integration with social networks like Twitter. We feel a candidate's personality can be better understood if we are given voluntary access to these professional profiles.

For more information about Cognitive Head Hunter, please feel free to email Paulo directly.

Here's a look at what Geomy George had to say about the development of Customs Risk Advisor.

What is Customs Risk Advisor?

Customs Risk Advisor is a mobile application that helps customs officers assess the risk associated with a consignment. The app uses Watson APIs to analyze unstructured data, which is not currently leveraged in most modern risk assessment tools.

What challenge does Customs Risk Advisor address?

Each year, governments around the world work 24/7 to manage the inbound and outbound movement of goods across their borders. This process – meant to secure public safety and prevent economic fraud – costs such entities and their societies significant financial resources. Customs Risk Advisor seeks to improve the efficiency of this process and reduce costs.
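The concept-matching idea behind Paulo's Cognitive Head Hunter – extract key concepts from both the candidate's curriculum and the job offering, then compare them – can be approximated with a simple set-overlap score once the concepts are in hand. This is a minimal illustrative sketch, not the team's actual code; the function names, the Jaccard measure, and the `concepts` field are all assumptions.

```python
# Sketch: rank candidates against a job posting by overlap of extracted
# concepts (as a service like Concept Insights might return them).

def concept_overlap_score(candidate_concepts, job_concepts):
    """Jaccard similarity between two sets of extracted concepts."""
    cand, job = set(candidate_concepts), set(job_concepts)
    if not cand or not job:
        return 0.0
    return len(cand & job) / len(cand | job)

def rank_candidates(candidates, job_concepts):
    """Return candidates sorted best-match-first against the job's concepts."""
    return sorted(
        candidates,
        key=lambda c: concept_overlap_score(c["concepts"], job_concepts),
        reverse=True,
    )
```

A real system would weight concepts by relevance rather than treat them as a flat set, and blend in Personality Insights signals, but the overlap score captures the basic matching step.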
What APIs did you integrate? Which API do you think offers the most game-changing capability?

We integrated the following APIs in our solution: Entity Extraction, Concept Tagging, Personality Insights, Concept Insights, and Q&A. From our application's context, we believe that Watson's Concept Tagging API is the most game-changing. This service allows the app to categorize concepts (e.g., organized crime, smuggling) from given unstructured data, which contributes to a more accurate and reliable determination of risky consignments.

On a scale of 0-10, what was your team's overall experience level using Watson services or Bluemix prior to this event (0 = none, 10 = expert user)?

Watson services – 0; Bluemix – 7. When our team began developing this application, we had absolutely no experience using Watson APIs. However, we were comfortable on the Bluemix platform, having used it for several other projects.

How difficult would you say it was to build your application using the Bluemix platform?

From our experience, developing an app on Bluemix was relatively simple. Runtime environments are available on demand, which takes away the burden of installation and configuration. Additionally, the sample apps and documentation make it much easier for developers to get started on Bluemix. However, one suggestion we'd like to offer is that IBM provide a way to build a small data corpus via Bluemix to support Watson's beta APIs. This would give developers more confidence and help them showcase their solutions. All said and done, we strongly believe the Bluemix platform is getting stronger and stronger.

What advice would you give to a developer looking to build with Watson APIs who hasn't done so before?

We offer the following recommendations to developers who want to work with Watson APIs:

1. It's very important to identify a strong business use case before integrating Watson APIs.
2. Your API results are only as good as how well you train Watson with relevant documents. This is not trivial.
3. Try to refine your input data as much as possible to get the best results from Watson.
4. Understand the unique workings of each API and select Watson APIs wisely based on your requirements.

For more information about Customs Risk Advisor, please feel free to email Geomy directly.

Here's a look at what Sudheendra Sreedharamurthy had to say about the development of The Rover Project.

What is The Rover Project?

The Watson Rover is an autonomous robotic system able to navigate an environment using visual cues and voice interactions. It demonstrates our platform's capability to establish communication with multiple IoT devices to solve complex problems using Watson services.

What challenge does The Rover Project address?

This IoT project leverages Watson's cognitive capabilities to solve complex problems in spaces as diverse as human-assisted navigation, disaster recovery, and emergency management.

What APIs did you integrate? Which API do you think offers the most game-changing capability?

In the technology demonstrator, we integrated the Language Identification, Machine Translation, Visual Recognition, and Speech to Text APIs. Of the APIs integrated, we consider Visual Recognition a game changer given our application context. The biggest challenge in robotics and IoT is computer vision – making sense of visual inputs. The standard image processing algorithms available through technologies such as OpenCV are limited to identifying edges, contours, colors, etc., and do not generate any intelligence or insight about the image as a whole. Visual Recognition, in contrast, provides a completely new capability to analyze images and develop intelligence with a fair amount of depth. In other words, VR takes image processing from sheer logic-based number crunching to statistical and cognitive analysis.
For example, VR not only recognizes human faces, but can also determine age (child, adult), count (single, group), and context (protest, gathering, etc.). VR can also be trained to analyze images specific to the application context. This means we can now use Watson to enhance computer vision in robotic applications and bring in a cognitive element. Moreover, this can be achieved using fairly simple, low-cost processing nodes.

On a scale of 0-10, what was your team's overall experience level using Watson services or Bluemix prior to this event (0 = none, 10 = expert user)?

Our team had little to no experience with Watson or Bluemix prior to the start of this event (one of us had dabbled a bit in Bluemix). So we were probably at level 1, or at most 2, when the event started.

How difficult would you say it was to build your application using the Bluemix platform?

It obviously is not difficult, considering that we built a fairly complex robot that uses Bluemix and Watson in less than a month. The tutorials and sample code available in the developer cloud were a great help; they gave us a head start and accelerated completion of the technology demonstrator. For example, the key challenge for us was to connect the Raspberry Pi to Bluemix and give it the capability to leverage Watson. There is a detailed recipe (steps, sample code) in the developer cloud for connecting a Raspberry Pi to Bluemix (IoT Foundation). This became our starting point and helped us become familiar with the technologies and integration architecture. We were then able to translate those learnings and choose the right architecture for our end purpose.

The other advantage we had with Bluemix was rapid prototyping. We could make quick changes or additions to the base code to see how a particular feature or function worked, and we could do frequent deployments and test multiple scenarios without worrying about underlying platform complexities.
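The Raspberry Pi-to-Bluemix connection the team describes goes through the IoT Foundation's MQTT interface. As a rough sketch of the device-side plumbing that recipe sets up: a device identifies itself with a structured client ID and publishes JSON events on event topics. The org, device type, device ID, and event name below are placeholders, and the commented connection code assumes an MQTT client such as paho-mqtt.

```python
# Sketch: device-side identifiers and payloads for publishing rover events
# to the Bluemix IoT Foundation over MQTT. Values are illustrative.
import json

def iot_client_id(org, device_type, device_id):
    # IoT Foundation device client IDs follow the d:<org>:<type>:<id> pattern
    return "d:{}:{}:{}".format(org, device_type, device_id)

def event_topic(event_id, fmt="json"):
    # Devices publish events on iot-2/evt/<event>/fmt/<format>
    return "iot-2/evt/{}/fmt/{}".format(event_id, fmt)

def event_payload(**readings):
    # Event bodies carry the sensor readings under a top-level "d" key
    return json.dumps({"d": readings})

# With paho-mqtt, the connection would look roughly like:
#   client = mqtt.Client(iot_client_id("myorg", "raspi", "rover01"))
#   client.username_pw_set("use-token-auth", auth_token)
#   client.connect("myorg.messaging.internetofthings.ibmcloud.com", 1883)
#   client.publish(event_topic("status"), event_payload(speed=0.4))
```

Once the device is publishing, a Bluemix application bound to the same IoT organization can subscribe to these events and route them to Watson services.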
The flexibility of the hybrid model – where some parts of the code sit on the cloud platform while other parts run locally on the IoT devices – allowed us to develop an architecture that was both optimal and scalable.

What advice would you give to a developer looking to build with Watson APIs who hasn't done so before?

Our advice is to roll up your sleeves and get your hands dirty understanding the Watson APIs. Cognitive computing is a paradigm shift. We believe it is best understood by choosing a few of the right applications and building them hands-on. The surprising thing is that it is not time-consuming: anyone should be able to put together a basic prototype with a few good hours spent over a weekend.

What do you see as the next step for your application's development?

As mentioned earlier, we have completed the technology demonstrator. The next step is to extend it into a full-fledged prototype. Our choice is to develop a rover that can be used in disaster recovery situations, primarily for reconnaissance purposes. The rover will be able to interface with human rescue teams using natural voice and will use cognitive vision capabilities to identify human survivors, assess damage, or search for specific objects. To achieve this, we will need more complex coordination of multiple Watson APIs. We have started the groundwork and expect to make real progress over the next few days.

What are the "top hacks" you would like to share with fellow developers?

Our top hack is the code we developed to integrate the Visual Recognition API into our application. It had four components:

1. Precision imaging – precision navigation of the rover to ensure proper imaging of the target for VR purposes.
2. Image pre-processing – recognizing and extracting the parts of the image required for visual recognition.
3. Watson VR analysis of the extracted part of the image.
4. Analysis of the Watson VR output to take further actions.

While the actual code to perform the VR analysis (step 3) was not very complex, the precision imaging and image pre-processing required a serious amount of coding.

For more information about The Rover Project, please feel free to email Sudee directly.
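The four-component pipeline Sudee describes can be sketched end to end in a few functions. This is a simplified illustration, not the rover's code: the region coordinates, label names, and thresholds are assumptions, and `classify` is a stand-in for the actual Watson Visual Recognition call in step 3.

```python
# Sketch of the rover's VR pipeline: crop the target region out of a frame
# (step 2), classify it (step 3, stubbed here), and act on the labels (step 4).
# Step 1 (precision imaging) happens in the navigation layer before this runs.

def preprocess(image, region):
    """Step 2: extract the part of the image needed for recognition.

    `image` is a 2-D list of pixel rows; `region` is (top, left, height, width).
    """
    top, left, height, width = region
    return [row[left:left + width] for row in image[top:top + height]]

def classify(image_region):
    """Step 3 stand-in: a real implementation would send the region to the
    Watson Visual Recognition API and return its labels with confidences."""
    return [("person", 0.82), ("group", 0.41)]

def decide_action(labels, threshold=0.5):
    """Step 4: act on the VR output, e.g. approach regions containing people."""
    confident = [name for name, score in labels if score >= threshold]
    return "approach" if "person" in confident else "continue_search"
```

For a frame where navigation has centered the target, the flow is simply `decide_action(classify(preprocess(frame, region)))`; as the interview notes, most of the real effort lives in getting steps 1 and 2 right so that step 3 sees a usable image.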