Nested Named Entity Tagger


This model annotates each word or term in a piece of text with a tag representing the entity type, drawn from a list of 145 entity tags in the GENIA Term corpus version 3.02. A detailed list of all entities can be found here

The model consists of a seq2seq architecture with a bi-directional LSTM encoder applied to character-level embedding vectors, which are combined with pre-trained word2vec and pre-trained binary FastText word vector embeddings. The contextualized embeddings (BERT, ELMo, Flair) are generated using the FlairNLP library; each per-token BERT contextualized word embedding is created as the average of the embeddings of that token's corresponding BERT subwords. Under the hood, Flair uses the pretrained BERT Large Uncased weights. Finally, an LSTM decoder layer is applied to this combined vector representation to generate the named entity tags. The input to the model is a string, and the output is a list of terms in the input text (after applying simple tokenization), together with a list of predicted entity tags for each term.
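The subword-averaging step described above can be illustrated with a small sketch (not the model's actual code): given BERT subword embeddings and a mapping from each token to its subword indices, the per-token embedding is the mean of its subword vectors. The function name and example values below are hypothetical.

```python
import numpy as np

def average_subword_embeddings(subword_vectors, token_to_subwords):
    """Average BERT subword embeddings into per-token embeddings.

    subword_vectors: (num_subwords, dim) array of subword embeddings.
    token_to_subwords: list mapping each token to the indices of its subwords.
    """
    return np.stack(
        [subword_vectors[indices].mean(axis=0) for indices in token_to_subwords]
    )

# Toy example: the first token was split into three subwords, the second kept whole.
subwords = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
token_map = [[0, 1, 2], [3]]
token_vectors = average_subword_embeddings(subwords, token_map)
print(token_vectors)  # [[3. 4.] [7. 8.]]
```

Each row of the result is one token's embedding, regardless of how many subwords the tokenizer produced for it.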

The model architecture is based on Jana Straková's Neural Architectures for Nested NER through Linearization. The model files are hosted on IBM Cloud Object Storage, and the model was trained by the IBM CODAIT team.

Model Metadata

| Domain | Application | Industry | Framework | Training Data | Input Data Format |
| ------ | ----------- | -------- | --------- | ------------- | ----------------- |
| Natural Language Processing | Nested Named Entity Recognition | BioTech | TensorFlow | GENIA Corpus | Text |



| Component | License | Link |
| --------- | ------- | ---- |
| This repository | Apache 2.0 | LICENSE |
| Model Weights | Apache 2.0 | LICENSE |
| Model Code (3rd party) | Mozilla Public 2.0 | LICENSE |
| Test samples | Apache 2.0 | samples README |

Options available for deploying this model

This model can be deployed using the following mechanisms:

* Deploy from Quay

To run the docker image, which automatically starts the model serving API, run:

$ docker run -it -p 5000:5000

This will pull a pre-built image from the container registry (or use an existing image if already cached locally) and run it.

* Deploy on Red Hat OpenShift

You can deploy the model-serving microservice on Red Hat OpenShift by following the instructions for the OpenShift web console or the OpenShift Container Platform CLI in this tutorial, specifying as the image name.

* Deploy on Kubernetes

You can also deploy the model on Kubernetes using the latest docker image on Quay.

On your Kubernetes cluster, run the following commands:

$ kubectl apply -f

The model will be available internally at port 5000, but can also be accessed externally through the NodePort.

A more elaborate tutorial on how to deploy this MAX model to production on IBM Cloud can be found here.

Example Usage

You can test or use this model in the following ways:

Test the model using cURL

Once deployed, you can test the model from the command line. For example, if running locally:

$ curl -X POST -H 'Content-Type: application/json' -d '{"text":"The peri-kappa B site mediates human-immunodeficiency virus type 2 enhancer activation, in monocytes but not in T cells."}' 'http://localhost:5000/model/predict'
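The same request can be built with Python's standard library. The sketch below assumes the default local endpoint used in the cURL example; it constructs the request without sending it, and the commented lines show how to submit it once the model server is running.

```python
import json
import urllib.request

# Default local endpoint, matching the cURL example above.
url = "http://localhost:5000/model/predict"
payload = {
    "text": "The peri-kappa B site mediates human-immunodeficiency virus "
            "type 2 enhancer activation, in monocytes but not in T cells."
}
body = json.dumps(payload).encode("utf-8")

request = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the model server running, the request can be sent like this:
# with urllib.request.urlopen(request) as response:
#     result = json.load(response)
print(request.get_method(), request.full_url)
```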

You should see a JSON response like that below:

{
  "status": "ok",
  "predictions": {
      "entities": [
        "input_terms": [