By Riya Mary Roy, Sanjeev Ghimire | Published September 17, 2018
Built for developers who are looking to recognize text from images and translate it, this pattern shows how to capture an image, extract text, and translate that text using Tesseract OCR and Watson Language Translator.
Visual content can be much more engaging than plain text. Add some interesting text and cool typography to an image and you've got a great way to create an infographic, run an advertisement, or share memes that you find funny. Adding text to an image can help you get your message across much more effectively. But what happens when you need to share it across geographies with different languages?
This code pattern explains how to create a hybrid mobile app using the Apache Cordova development platform and the Node.js server application running on the IBM Cloud Kubernetes service. The app uses Tesseract OCR to recognize text in images, Watson Language Translator to translate the recognized text, and Watson Natural Language Understanding to extract emotion and sentiment from the text. The mobile app translates the recognized text from the images captured or uploaded from the photo album.
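The server-side flow the pattern describes (recognize text, translate it, analyze emotion and sentiment) can be sketched as a simple async pipeline. This is a minimal illustration only: the stage functions below are mocked placeholders standing in for the real tesseract.js, Watson Language Translator, and Watson Natural Language Understanding calls, so it runs without credentials; names like `processImage` are assumptions, not the pattern's actual API.

```javascript
// Sketch of the pattern's server-side pipeline: OCR -> translate -> analyze.
// Each stage is a mocked placeholder; the real app would call Tesseract OCR,
// the Watson Language Translator service, and Watson Natural Language
// Understanding via the ibm-watson SDK.

// Placeholder for Tesseract OCR (real app: tesseract.js recognize()).
function recognizeText(imageBuffer) {
  return Promise.resolve('Hello world');
}

// Placeholder for Watson Language Translator (real app: ibm-watson SDK).
function translateText(text, targetLang) {
  const fakeTranslations = { 'Hello world': 'Hola mundo' }; // mock data
  return Promise.resolve(fakeTranslations[text] || text);
}

// Placeholder for Watson Natural Language Understanding.
function analyzeText(text) {
  return Promise.resolve({ sentiment: 'positive', emotion: 'joy' });
}

// The flow the mobile app drives: the user captures or uploads an image,
// then the server recognizes, translates, and analyzes its text.
async function processImage(imageBuffer, targetLang) {
  const recognized = await recognizeText(imageBuffer);
  const translated = await translateText(recognized, targetLang);
  const analysis = await analyzeText(recognized);
  return { recognized, translated, analysis };
}

processImage(Buffer.from(''), 'es').then((result) => {
  console.log(JSON.stringify(result));
});
```

In the real pattern the mobile app (Apache Cordova) sends the image to the Node.js server on Kubernetes, which orchestrates these three stages and returns the combined result.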
When you have completed this code pattern, you will know how to capture an image, extract its text, translate that text, and analyze its emotion and sentiment. Find the detailed steps for this pattern in the README.