Generate recipes using a multi-ingredient-aware LSTM

AI is slowly seeping into every aspect of our lives. AI can be an artist that draws the Mona Lisa, or a nurse who aids the doctor. We also know AI as a novelist: it predicts letters to form words, and from words it forms sentences. However, AI knows nothing about character or plot, so writing an essay on a topic with multiple contexts is a challenging task for it. This short tutorial shows you how to build an AI that generates text for you based on a topic with multiple contexts.

Learning objectives

This introductory tutorial explains how to generate recipes from available ingredients, using a multi-ingredient-aware LSTM network.


If you are new to neural networks and text generation with LSTMs, the following links provide detailed information about everything you'll need to understand and extend this tutorial for various applications.

Estimated time

Completing this tutorial should take about 45 minutes.


This tutorial consists of the following steps:

  • Problem statement
  • Data collection and construction for training the multi-ingredient aware LSTM
  • Methodology for constructing a multi-ingredient aware LSTM
    • Topic-averaged LSTM (TAV-LSTM)
    • Topic-attention LSTM (TAT-LSTM)
    • Multi-ingredient-aware LSTM (MTA-LSTM)

Step 1: Problem statement

Text generation is a fundamental and challenging task in natural language processing and computational linguistics. Recipe generation is even more challenging, for the following reasons:

  • It has to understand multiple ingredients.
  • It has to understand the context of all the ingredients put together to create a recipe.
  • It needs to predict the sequence of steps in the recipe.

Given a set I = {ingredient_1, …, ingredient_i, …, ingredient_k} of ingredients, recipe generation aims to generate a recipe for the user from the ingredients available to them.

Example :

Sample ingredients 1 = {eggs, onion, bread}

Sample ingredients 2 = {Green tea, chocolate powder, baking soda, milk}
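The task interface can be sketched in a few lines of Python. Here `generate_recipe` is a hypothetical placeholder for the trained model described in the rest of this tutorial, used only to show the input and output shapes of the task:

```python
# The ingredient sets from the examples above, as plain Python sets.
ingredients_1 = {"eggs", "onion", "bread"}
ingredients_2 = {"green tea", "chocolate powder", "baking soda", "milk"}

def generate_recipe(ingredients):
    # Placeholder: a real model would decode a full recipe text here.
    return "Recipe using: " + ", ".join(sorted(ingredients))

text = generate_recipe(ingredients_1)
```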

Step 2: Data collection and construction for training the multi-ingredient aware LSTM

  • Web crawling: To guarantee the quality of the crawled text, crawl only the recipes that have reviews and ratings.
  • Crawl websites like Epicurious and Bon Appétit.
  • Follow the same process with both and form a corpus.

The process of the data collection is summarized as follows:

  • Crawl the articles, run content-linking algorithms on them, and keep the ones with high scores.
  • Summarize the articles (limited to 50–120 words) and store them.
  • Employ TextRank to extract keywords as topic words.
  • Randomly split the collected paragraph-level recipes in a 6:1 ratio into a training set and a test set. Name this dataset recipes.
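The split step above can be sketched as follows. The corpus here is a hypothetical stand-in for the crawled and summarized recipe texts:

```python
import random

# Hypothetical paragraph-level recipe texts collected from the crawl.
corpus = [f"recipe text {i}" for i in range(700)]

random.seed(42)
random.shuffle(corpus)

# Split in a 6:1 ratio: six parts training, one part test.
split = len(corpus) * 6 // 7
train_set, test_set = corpus[:split], corpus[split:]
```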


Step 3: Methodology for constructing a multi-ingredient aware LSTM

Topic-averaged LSTM (TAV-LSTM): the original basic LSTM

Here, we’ll describe a topic-averaged long short-term memory (TAV-LSTM) for recipe generation. The topic semantics are represented as a weighted average of all topic words' embeddings. The network computes the representation of a longer expression (e.g. a sentence) from the sequence of its input words one by one, which can be viewed as an encoding process; the decoding phase can be seen as the inverse of encoding.
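A minimal PyTorch sketch of the TAV idea: average the topic word embeddings and use the result to initialise the decoder LSTM's state. All sizes and token ids here are hypothetical, and the embedding and hidden dimensions are kept equal so the topic vector can serve directly as the initial hidden state:

```python
import torch
import torch.nn as nn

# Hypothetical sizes; real values depend on your vocabulary and corpus.
vocab_size, embed_dim, hidden_dim = 1000, 64, 64

embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

# Topic word ids (e.g. ingredient tokens) for one recipe, batch of 1.
topic_ids = torch.tensor([[3, 17, 42]])       # e.g. {eggs, onion, bread}

# TAV: the topic vector is the average of the topic word embeddings,
# used here to initialise the decoder's hidden and cell states.
topic_vec = embedding(topic_ids).mean(dim=1)  # (1, embed_dim)
h0 = topic_vec.unsqueeze(0)                   # (num_layers=1, batch=1, hidden_dim)
c0 = torch.zeros_like(h0)

# Decode one step from a start-of-sequence token (id 0, hypothetical).
sos = embedding(torch.tensor([[0]]))          # (1, 1, embed_dim)
out, _ = lstm(sos, (h0, c0))
```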

Topic-attention LSTM (TAT-LSTM)

The aforementioned TAV-LSTM model learns topic information through a weighted average of the input topic word embeddings; that is, each topic word is treated uniformly. The new model extends TAV-LSTM with an attention mechanism, which scores the semantic relatedness of each topic word to the word being generated and softly selects the relevant topic words to guide the model.
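The scoring-and-selection step can be sketched with a simplified dot-product attention over the topic embeddings (the original model may use a learned scoring function; this is an illustrative assumption):

```python
import numpy as np

def topic_attention(decoder_state, topic_embeds):
    """Score each topic word against the current decoder state and
    softly select them via a softmax over the scores."""
    scores = topic_embeds @ decoder_state        # one score per topic word
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    # Context vector: attention-weighted sum of the topic embeddings.
    return weights, weights @ topic_embeds

rng = np.random.default_rng(0)
topic_embeds = rng.normal(size=(3, 8))           # 3 topic words, dim 8
state = rng.normal(size=8)                       # current decoder state
weights, context = topic_attention(state, topic_embeds)
```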

Multi-ingredient aware LSTM (MTA-LSTM)

Although TAT-LSTM makes better use of the topic information, it cannot guarantee that the semantics of all the topic words are represented in the generated text. Furthermore, conventional attention models such as TAT-LSTM tend to ignore past attention history, which may cause some topic words to appear repeatedly while others never appear in the generated text. To address both problems, introduce a multi-topic-aware LSTM (MTA-LSTM) by adding a topic-aware component to the TAT-LSTM model. The basic idea is to maintain a topic coverage vector in which each dimension represents the degree to which a topic word still needs to be expressed in future generation; the attention policy is adjusted accordingly so the model favors unexpressed topic words. This topic distribution information improves the thematic integrity and readability of the generated text.
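The coverage idea can be sketched as follows. Each topic word starts fully "unexpressed" (coverage 1.0), and the attention mass it receives at each decoding step is subtracted, so already-expressed topics attract less attention later. The update rule and decay rate here are illustrative assumptions, not the exact formulation:

```python
import numpy as np

def update_coverage(coverage, attn_weights, rate=0.5):
    # Subtract the attention mass each topic word received this step,
    # clamped so coverage stays in [0, 1].
    return np.clip(coverage - rate * attn_weights, 0.0, 1.0)

coverage = np.ones(3)             # 3 topic words, all unexpressed
attn = np.array([0.7, 0.2, 0.1])  # attention weights at this step
coverage = update_coverage(coverage, attn)
# Topic 0 received the most attention, so its remaining "need" drops most.
```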

In this tutorial, we discussed a multi-ingredient-aware approach to recipe generation that ensures the recipe involves the semantics of all topic words. Compared with conventional natural language generators, such as the attention-based sequence-to-sequence model, this approach takes the multi-topic distribution into account. In addition, the model can generate multi-topic-related and expression-coherent recipes by incorporating attention and coverage mechanisms.