A person’s emotions often significantly influence their behavior and performance in the real world. As such, understanding emotions is a key component of effective communication and understanding between individuals.

We are pleased to announce IBM Watson Emotion Analysis – a new beta function that detects joy, anger, disgust, sadness, and fear in text. IBM Watson Emotion Analysis is the latest function to join the IBM Watson AlchemyLanguage service on the Watson Developer Cloud.

What Emotions Does Emotion Analysis Detect?

Joy: Joy or happiness has shades of enjoyment, satisfaction, and pleasure. There is a sense of well-being, inner peace, love, safety, and contentment.  

Fear: Fear is a response to impending danger. It is a survival mechanism that is a reaction to some type of negative stimulus. It may be a mild caution or an extreme phobia.

Sadness: Sadness indicates a feeling of loss and disadvantage. A person who appears quiet, less energetic, and withdrawn may be experiencing sadness.

Disgust: Disgust is an emotional response of revulsion to something considered offensive, unpleasant, or revolting.

Anger: Anger is evoked by injustice, conflict, humiliation, negligence, or betrayal. Active anger leads the individual to attack the target, verbally or physically; passive anger leads the person to sulk silently and feel tension and hostility.


How Can My Business Use Emotion Analysis?

Identifying emotions programmatically can be useful in a variety of applications, including the following:

  • Product Feedback and Campaign Effectiveness: Monitor the emotional reaction of your target audience for your products, campaigns, and other marketing communications. 
  • Customer Satisfaction: Analyze customer surveys, emails, chats, and reviews to determine the emotional pulse of your customers. 
  • Contact-Center Management, Automated Agents, and Robots: Detect emotions in chats or other conversations and adapt to provide an appropriate response. For instance, direct a customer to a human agent if intense anger is detected.

How Does It Work?

We have developed an ensemble framework to infer emotions from a given text; currently, our model works on English-language text. To derive emotion scores, we use a stacked-generalization ensemble. Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In our framework, each lower-level model differs in its feature-extraction process and classification method.

Each lower-level model first tokenizes the text and extracts features from the tokens: n-grams (unigrams, bigrams, and trigrams), punctuation, emoticons, curse words, greeting words (such as hello, hi, and thanks), and sentiment polarity. These features are then fed to machine-learning classifiers trained with a variety of supervised learning approaches on a variety of datasets consisting of emotional statements that have been annotated by humans.
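The two ingredients above — token n-gram features and a high-level model that combines lower-level outputs — can be sketched in a few lines of Python. This is a minimal illustration of the ideas, not the production model: the real system also uses punctuation, emoticon, and sentiment features, and its meta-model is itself trained rather than a fixed weighted average.

```python
from itertools import chain

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def extract_features(text):
    """Token n-gram features (unigrams, bigrams, trigrams), as in the
    lower-level models described above."""
    tokens = text.lower().split()
    return list(chain.from_iterable(ngrams(tokens, n) for n in (1, 2, 3)))

def stack(predictions, weights):
    """Stacked generalization in miniature: combine lower-level model
    outputs. predictions is a list of dicts mapping emotion -> score,
    one per lower-level model; here the meta-model is a weighted average."""
    emotions = predictions[0].keys()
    return {e: sum(w * p[e] for w, p in zip(weights, predictions))
            for e in emotions}
```

In the real framework, the meta-model learns how much to trust each lower-level classifier per emotion, which is what gives stacking its edge over any single model.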

We report emotion scores for each given text in terms of the percentage of each emotion category: anger, disgust, fear, joy, and sadness. See the following example.

Given text: “the day I was told that I had been accepted as a student of economics.”

Output emotion: [anger: 0.045, disgust: 0.023, fear: 0.035, joy: 0.93, sadness: 0.07]. Based on this, we can infer that this text expresses joy with 93% confidence.
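Interpreting such an output is straightforward: the dominant emotion is simply the highest-scoring category. A small sketch, using the scores from the example above:

```python
# Emotion scores from the example output above.
scores = {"anger": 0.045, "disgust": 0.023, "fear": 0.035,
          "joy": 0.93, "sadness": 0.07}

# The dominant emotion is the category with the highest score.
dominant = max(scores, key=scores.get)
print(dominant, scores[dominant])  # joy 0.93
```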

How Accurate Is Emotion Analysis?

We have conducted experiments on three datasets to measure the accuracy of our emotion-estimation approach, using the most frequently reported metric, macro-average F1 score. On sentence-level datasets such as SemEval and ISEAR, the average performance of our ensemble model (macro-average F1 scores of around 41% and 68%, respectively) is statistically significantly better than the best reported results from state-of-the-art models (macro-average F1 scores of around 37% and 63%, respectively). Similarly, on our Twitter dataset, judged against ground truth from human coders, our model performs significantly better than the baselines (30% macro-average F1 vs. 25% for state-of-the-art models).

This is the current state of our work at the time of releasing this beta. We are continuously improving our models and look forward to releasing enhanced models in the future.

The beta API is currently available for English text input. More details about this service, the science behind it, how to use the APIs, and example applications are available in the documentation. You can also try out the service at this demo link.
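As a rough sketch of what a call looks like, the snippet below builds a GET request URL for the emotion endpoint. The base URL, endpoint path, and parameter names shown here follow the AlchemyLanguage conventions of the time but are illustrative assumptions; check the documentation for the authoritative request format, and `YOUR_API_KEY` is a placeholder for your own credentials.

```python
from urllib.parse import urlencode

# Assumed AlchemyLanguage-style endpoint; verify against the documentation.
BASE = "https://gateway-a.watsonplatform.net/calls/text/TextGetEmotion"

def build_request(api_key, text):
    """Return the full GET URL for an emotion-analysis call."""
    params = urlencode({"apikey": api_key, "text": text, "outputMode": "json"})
    return f"{BASE}?{params}"

url = build_request("YOUR_API_KEY", "I am thrilled with the results!")
```

The JSON response would then contain the five emotion scores in the format shown in the example earlier in this post.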
__________________________________________________________________________________________________

About the author:
Jalal Mahmud is a Research Staff Member at the IBM Almaden Research Center and currently manages the Personality Analytics research group under IBM Watson Innovation. Dr. Mahmud joined IBM in October 2008. His research interests include computational user modeling, intelligent user interfaces, web analytics, social media analysis, machine learning, and data mining. He earned his Ph.D. in Computer Science from the State University of New York at Stony Brook, where he also received an MS in Computer Science in 2006, and holds a B.Sc. in Computer Science and Engineering from the Bangladesh University of Engineering and Technology (BUET). He has published extensively in top-rated conferences and journals, and received best-paper nominations at top conferences such as WWW 2006, WWW 2007, IUI 2012, IUI 2013, and IUI 2014. Dr. Mahmud is an IBM Master Inventor with 22 issued patents and 16 others filed, has received his ninth Plateau Invention Achievement Award, and is a member of the IBM Academy of Technology.
