Build a chatbot moderator for anger detection, natural language understanding, and removal of explicit images
Process messages and images exchanged in a chat channel using Watson services to moderate the discussions
The Build a cognitive moderator service code pattern helps you build a cloud service that can moderate a social conversation and notify users if they are using inappropriate words or sentences. It also removes images that are considered to be explicit from the conversation. The code pattern is designed for a Slack channel, but it can be applied to any other social channel or application.
In this code pattern, we use IBM Cloud Functions, a Function-as-a-Service (FaaS) platform that executes functions in response to incoming events. The service, built in Python, is deployed as an action and uses IBM Watson Natural Language Understanding and IBM Watson Visual Recognition.
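The shape of a Cloud Functions action is simple: a Python function named `main` that receives the event payload as a dict and returns a dict. The sketch below shows that skeleton only; the `text` and `warning` field names are illustrative placeholders, not the code pattern's actual payload schema, and the Watson calls are omitted.

```python
def main(params):
    """Entry point for an IBM Cloud Functions (Apache OpenWhisk) Python action.

    `params` is the incoming event payload (e.g., a chat message forwarded
    from Slack). A real moderator action would call Watson NLU and Visual
    Recognition here; this sketch only demonstrates the action contract:
    take a dict in, return a dict out.
    """
    text = params.get("text", "")  # hypothetical field name
    # Placeholder result; the real action would return a moderation verdict.
    return {"received": text, "warning": None}
```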
IBM Watson Natural Language Understanding can analyze semantic features of text input such as categories, concepts, emotions, entities, keywords, metadata, relations, semantic roles, and sentiment. Messages exchanged in the channel are processed using Watson Natural Language Understanding to extract the sentiment and emotion of the text. If anger is detected, the sender is notified in the channel with a warning message.
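The anger check reduces to reading the anger score out of the NLU emotion analysis and comparing it against a cutoff. The helper below sketches that step against the general shape of an NLU emotion response; the function name, threshold value, and exact response handling are assumptions for illustration, not taken from the code pattern.

```python
ANGER_THRESHOLD = 0.5  # illustrative cutoff, not from the code pattern


def is_angry(nlu_response, threshold=ANGER_THRESHOLD):
    """Return True if an NLU emotion analysis reports anger above `threshold`.

    `nlu_response` is assumed to follow the general shape of a Watson NLU
    emotion result, e.g.:
        {"emotion": {"document": {"emotion": {"anger": 0.83, ...}}}}
    Missing keys are treated as "no anger detected".
    """
    emotions = (
        nlu_response.get("emotion", {})
        .get("document", {})
        .get("emotion", {})
    )
    return emotions.get("anger", 0.0) > threshold
```

With a helper like this, the action only has to post a warning back to the channel when `is_angry(...)` returns True.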
IBM Watson Visual Recognition uses deep learning algorithms to analyze images for scenes, objects, faces, and other content, and the response includes keywords that provide information about the content. Images exchanged in the channel are processed using Watson Visual Recognition to identify and classify them. If the image is considered explicit, it is automatically removed from the channel and the sender is notified with a warning message.
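The image check works the same way: walk the classification result and flag the image if an "explicit" class scores above a cutoff. The sketch below assumes the general shape of a Visual Recognition classify response; the class label, threshold, and helper name are illustrative assumptions rather than the code pattern's exact logic.

```python
EXPLICIT_THRESHOLD = 0.5  # illustrative cutoff, not from the code pattern


def is_explicit(vr_response, threshold=EXPLICIT_THRESHOLD):
    """Return True if any classifier labels an image "explicit" above `threshold`.

    `vr_response` is assumed to follow the general shape of a Watson Visual
    Recognition classify result, e.g.:
        {"images": [{"classifiers": [{"classes":
            [{"class": "explicit", "score": 0.92}]}]}]}
    """
    for image in vr_response.get("images", []):
        for classifier in image.get("classifiers", []):
            for cls in classifier.get("classes", []):
                if cls.get("class") == "explicit" and cls.get("score", 0.0) > threshold:
                    return True
    return False
```

When the helper returns True, the action would delete the image from the channel and post the warning message to the sender.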
Try it out and share your feedback with us!