By Riya Mary Roy, Sanjeev Ghimire | Published September 17, 2018
Built for developers who want to recognize text in images and translate it, this code pattern shows how to capture an image, extract the text, and translate that text using Tesseract OCR and Watson Language Translator.
Visual content can be much more engaging than plain text. Add some interesting text and cool typography to an image and you have a great way to create an infographic, run an advertisement, or share memes that you find funny. Adding text to an image can help you get your message across much more effectively. But what happens when you need to share that image across geographies that speak different languages?
This code pattern explains how to create a hybrid mobile app using the Apache Cordova development platform and a Node.js server application running on the IBM Cloud Kubernetes Service. The app uses Tesseract OCR to recognize text in images, Watson Language Translator to translate the recognized text, and Watson Natural Language Understanding to extract emotion and sentiment from that text. The mobile app translates text recognized in images that are captured with the camera or uploaded from the photo album.
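The server-side flow above can be sketched as three steps chained together: recognize, translate, analyze. The snippet below is a minimal illustration of that orchestration only; the function bodies are stand-in stubs, not the pattern's actual code, which uses the Tesseract OCR engine and the Watson Node.js SDK (see the README for the real implementation).

```javascript
// Hypothetical sketch of the pipeline described above. Each stub stands in
// for a real service call in the code pattern:
//   recognizeText  -> Tesseract OCR
//   translateText  -> Watson Language Translator
//   analyzeText    -> Watson Natural Language Understanding

// Stub: in the real app, this runs Tesseract OCR on the image bytes.
async function recognizeText(imageBuffer) {
  return 'Bonjour le monde';
}

// Stub: in the real app, this calls Watson Language Translator with a
// translation model ID such as 'fr-en'.
async function translateText(text, modelId) {
  return { translation: 'Hello world', modelId };
}

// Stub: in the real app, this calls Watson Natural Language Understanding
// to extract sentiment and emotion from the translated text.
async function analyzeText(text) {
  return { sentiment: 'positive', emotion: 'joy' };
}

// Orchestrate the flow: image -> recognized text -> translation -> analysis.
async function processImage(imageBuffer, modelId) {
  const recognized = await recognizeText(imageBuffer);
  const { translation } = await translateText(recognized, modelId);
  const analysis = await analyzeText(translation);
  return { recognized, translation, analysis };
}
```

In the actual app, the Cordova client posts the captured image to the Node.js server, which runs this pipeline and returns the translated text and analysis to the device.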
When you have completed this code pattern, you will know how to capture or upload an image, extract its text with Tesseract OCR, translate that text with Watson Language Translator, and analyze it with Watson Natural Language Understanding. Find the detailed steps for this pattern in the README.