Good recommendations should be personalized. It’s incredible how much information we can gather about our users, or that is gathered about us as consumers. Retailers know our gender, location, social media likes and shares, purchase history, product views, cart abandons, and sometimes even where we went to college and our household income, just to name a few pieces of personal information. Yet most modern recommendation engines can only handle one or two of these user “dimensions” when driving what appear to be “personalized” recommendations. Patterns and correlations exist across many more of these factors, with enough statistical evidence behind them to be more than mere coincidence. To deliver a genuinely personalized recommendation on an e-commerce site, retailers need an engine that can recognize those patterns, and do so at scale.

I’ve been exploring this exact problem. For a couple of months now, I’ve been looking into how machine learning can be used to generate personalized product recommendations. I started by evaluating some existing machine learning tools built on top of Apache Spark, but nothing seemed to fit the use cases we had in our Cognitive Incubation team for IBM Watson Customer Engagement. A few internet searches later, I uncovered a couple of interesting articles and a promising, engine-neutral open source library called Apache Mahout. Just to be clear, Mahout doesn’t generate recommendations itself, which I’ll explain more about in a moment. Instead, Mahout creates cross co-occurrence (CCO) matrices that can then be used to serve up recommendations. The most intriguing part about Apache Mahout was that it promised to support multiple dimensions of factors. I had to try it out.

Getting started with Mahout was not easy. There was very little documentation, and few online examples from others experimenting with the tech. I eventually found an IBMer who was a committer and PMC member on the Apache Mahout open source project, Trevor Grant. Over the next few weeks, Trevor became my new best friend. He was quick to answer my questions and help me wrap my head around the custom Scala code we’d write to dynamically process any number of user behavior factors as input and generate the cross co-occurrence matrices. He put me in touch with Pat Ferrel, the man who wrote the “book” on Mahout and an impressive data scientist who helped explain the math and logic behind the formulas that go into creating a CCO matrix for item similarity. This is why I love the open source community: the willingness to give code, time, and expertise to improve a product or technology. I’m also happy to report that the Apache Mahout project’s documentation, tutorials, and examples are improving every day, making it easier for new developers to get going faster. Thank you, Trevor and Pat!
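
To make that concrete, here is a minimal sketch of what that kind of Scala driver can look like. It assumes Mahout’s Spark bindings and the SimilarityAnalysis CCO API described in the Mahout docs; the context setup, file names, and exact signatures below are from memory and for illustration only, so verify them against the Mahout source before relying on them.

```scala
// Sketch only: API names are from my reading of Mahout's Spark bindings and its
// CCO documentation; double-check signatures before using this for real.
import org.apache.mahout.math.cf.SimilarityAnalysis
import org.apache.mahout.math.indexeddataset.IndexedDataset
import org.apache.mahout.sparkbindings._
import org.apache.mahout.sparkbindings.indexeddataset.IndexedDatasetSpark

object CcoSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical local context; masterUrl and appName are placeholders.
    implicit val mahoutCtx = mahoutSparkContext(masterUrl = "local[*]", appName = "cco-sketch")
    implicit val sc = sdc2sc(mahoutCtx) // unwrap the plain SparkContext (bindings helper)

    // One log per behavior "dimension": each line is "userID,itemID".
    // The first file is the primary action (the one we want to drive); the rest
    // are secondary indicators. File names are invented for this example.
    val behaviorFiles = Seq("purchases.csv", "detail-views.csv", "cart-abandons.csv")

    val datasets: Seq[IndexedDataset] = behaviorFiles.map { file =>
      val userItemPairs = sc.textFile(file).map { line =>
        val Array(user, item) = line.split(",", 2)
        (user, item)
      }
      IndexedDatasetSpark(userItemPairs)
    }

    // Mahout returns one matrix per input: item-item co-occurrence for the primary
    // action plus one cross co-occurrence (CCO) matrix per secondary action, with
    // a log-likelihood ratio test deciding which co-occurrences are significant.
    val indicatorMatrices = SimilarityAnalysis.cooccurrencesIDSs(datasets.toArray)

    // Export each matrix; these "indicator" fields are what later get indexed
    // into the search engine (output paths are made up).
    indicatorMatrices.zip(behaviorFiles).foreach { case (matrix, name) =>
      matrix.dfsWrite(s"indicators/$name")
    }
  }
}
```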

However, through all of their help, I had somehow missed the subtle discussions about how Mahout doesn’t actually generate the recommendations. Then, one day, it all clicked: my ah-ha moment. I had gotten the code working and was producing the matrices, but I was staring at the output thinking, “Now what?” Through further discussions, Trevor and Pat explained that the output needs to be fed into a Lucene-based search engine like Apache Solr, which is then used to drive the recommendations. (If you want an IBM option for enterprise search, try IBM Watson Explorer.) That’s when I got excited. By using Lucene/Solr, businesses can add additional filters and business rules when pulling recommendations. Only want to recommend products that are in stock? Products that cost less than $100? Products in the parent category of “electronics”? That’s easy! It’s as simple as adding an AND clause to your search query.
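
Here is a hedged sketch of that “recommendations are just a search query” idea using the SolrJ client from Scala. The collection name and field names (purchase_cco, view_cco, in_stock, price, category) are hypothetical; they depend entirely on how you indexed the CCO output.

```scala
// Sketch: the user's recent behavior supplies the query terms against the CCO
// "indicator" fields, and business rules are ordinary filter clauses.
import org.apache.solr.client.solrj.SolrQuery
import org.apache.solr.client.solrj.impl.HttpSolrClient
import scala.jdk.CollectionConverters._

object RecommendSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical Solr collection holding the items plus their indicator fields.
    val solr = new HttpSolrClient.Builder("http://localhost:8983/solr/items").build()

    // Recent behavior for the current shopper (made-up SKUs).
    val recentPurchases = Seq("sku-1234", "sku-5678")
    val recentViews     = Seq("sku-9012")

    val query = new SolrQuery()
    query.setQuery(
      s"purchase_cco:(${recentPurchases.mkString(" ")}) OR view_cco:(${recentViews.mkString(" ")})")

    // Business rules: each filter query effectively ANDs another constraint.
    query.addFilterQuery("in_stock:true")        // only in-stock products
    query.addFilterQuery("price:[* TO 100]")     // under $100
    query.addFilterQuery("category:electronics") // parent category
    query.setRows(10)

    val results = solr.query(query).getResults.asScala
    results.foreach(doc => println(doc.getFieldValue("id")))
    solr.close()
  }
}
```

The point of the design is that relevance ranking does the recommending (items whose indicators best match the shopper’s history score highest), while merchandising constraints stay in plain filter clauses that anyone can adjust without retraining anything.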

The inventor in me started thinking about all the possibilities. There are so many industries today where personalization isn’t as rich an experience as it could (and should!) be, where loads of untapped user behavior data are just waiting for a technologist to improve the experience. I immediately thought of IoT, and specifically dressing rooms.

Imagine going into a store and bringing a few items with you into a dressing room. When you enter the dressing room, those items are automatically recognized using technologies like RFID, Bluetooth, or NFC. This action of trying on an item might be considered similar to a “product view” action online. As you put items in the “don’t like” pile, that’s equivalent to a “remove from cart” action. Common features across the items you try on, such as a tendency toward shorts, are monitored. Meanwhile, a tablet or smart mirror tracks those real-time behaviors and adjusts recommendations with each action. You might see a recommendation for another dress based on the one you just tried on. Tap the “Try on” button on the tablet, and a store associate brings you the dress, in your size.
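
Purely as a thought experiment, the glue between the fitting room and the recommender could be as small as a mapping from in-store events to the behavior “dimensions” the CCO model already understands. The event names, fields, and mappings below are invented for illustration.

```scala
// Speculative sketch: translate fitting-room IoT events into the same behavior
// logs (userID, itemID, behavior) that the online CCO pipeline is built from.
final case class FittingRoomEvent(userId: String, itemId: String, action: String)

def toBehavior(event: FittingRoomEvent): Option[(String, String, String)] =
  event.action match {
    case "entered-room-with-item" => Some((event.userId, event.itemId, "product-view"))
    case "dont-like-pile"         => Some((event.userId, event.itemId, "remove-from-cart"))
    case "try-on-request"         => Some((event.userId, event.itemId, "add-to-cart"))
    case _                        => None // no online equivalent; ignore
  }
```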

The more we know about our customers, or the more developers know about their users, the more personalized our solutions can feel and the more valuable and engaging our experiences are for those end users. I’m so excited I found Apache Mahout and can’t wait to expand our personalized recommendation solution into more unexpected industries.

 

Learn more about Apache Mahout and IoT


4 comments on “Machine learning, product recommendations, and IoT… Oh my!”

  1. Good discussion. Multi-domain predictive AI is in its infancy, with too many applications to enumerate. I like your example because it’s unexpected, not your typical “better recommender” example. Another surprising one is to augment search indexes with words that are not in the content. Who cares if “kicks” isn’t a word in the sneaker description if it leads to views? CCO can detect these cross-correlations of words that lead to views. If enough people search for “kicks” and view sneakers, why not augment the index with it? In fact, if you don’t, you may be leaving money on the table and creating a roadblock for your users.

    BTW, as a committer to Mahout I can promise more examples and better docs. By way of lame excuses, we just rewrote the entire codebase from a Hadoop MapReduce collection of algorithms to a general linear algebra solving engine that works with GPUs and runs on Spark (and other backends). But a lame excuse is better than none 🙂

    The docs for CCO are here: http://mahout.apache.org/users/algorithms/intro-cooccurrence-spark.html

  2. Great article. I’m beginning to catch up with ML and still haven’t decided which software I should focus on learning. There are too many of them. Would you be able to share the source code?

    Thanks,
    V
