A cognitive system is only as good as the data loaded into it. Loading data into a cognitive system is often referred to as an “ingestion” phase. Some systems do a single large ingestion, some do continuous ingestion, and some do a “rip and replace” series of ingestions (where ingestion n is deployed, ingestion n+1 runs later, and when ingestion n+1 finishes it replaces ingestion n). Since ingestion is the first part of a cognitive solution, it is important to get it right. Hence the need for a functional test of the ingestion layer, which I call an ingestion verification test.
An ingestion verification test suite should cover every step involved in getting raw data from a source system, converting it into an output format usable by the target system, and performing basic verification that the output format supports the interactions the rest of the solution will want to have with that output.
Types of ingestion processes
There are several types of ingestion processes used by cognitive systems. Here are a few that we have used, along with a description of how we tested them.
Local document extract/transform/load (ETL)
One solution read XML files from a local disk, used a series of parsers to extract key data, and stored this data in an output structure (ultimately in a database). The test suite included JUnit tests of the parsers (given an input file, did the output format have field values X, Y, Z?). Most of the tests used Mockito to stub out the database, but a handful of tests verified output at the database level. (Even at the component level, we take advantage of the testing pyramid.)
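A parser-level check of this kind can be sketched as follows. The real suite used JUnit and Mockito; this standalone version uses plain assertions, and the record structure, field names (title, author), and extractor methods are illustrative, not the actual solution's parsers.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical parser verification sketch: given a small XML input,
// does the extracted output carry the field values we expect?
public class ParserVerificationSketch {

    // Stand-ins for the solution's parsers: pull key fields out of a record.
    static String extractTitle(Document doc) {
        return doc.getElementsByTagName("title").item(0).getTextContent();
    }

    static String extractAuthor(Document doc) {
        return doc.getElementsByTagName("author").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<record><title>Feline Medicine</title>"
                   + "<author>Andrew</author></record>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // Field-level checks: did the output structure get values X, Y, Z?
        if (!"Feline Medicine".equals(extractTitle(doc))) throw new AssertionError("title");
        if (!"Andrew".equals(extractAuthor(doc))) throw new AssertionError("author");
        System.out.println("parser checks passed");
    }
}
```

In the real suite, the database write behind this extraction step would be stubbed with Mockito for most tests, with a few tests asserting against actual database rows.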
Remote service ETL
Like the local document solution above, except that documents were read from remote systems via web service calls. In addition to the parsing tests above (with and without a stubbed database), we added tests to verify the web service calls. We tested mostly with a stub of the remote web service calls, verifying that the right URLs were accessed and the right parsers called, but we also used a stub of the web service itself (which simply returned static documents) to verify that our usage of the HttpClient libraries was correct.
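The "stub of the web service itself" idea can be sketched with the JDK's built-in HTTP server and client: a throwaway server returns a static document, and we verify our client-side fetch logic against it. The endpoint path and method names here are illustrative, not the actual solution's.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sketch of testing remote-document ingestion against a stub web
// service that always serves the same static document.
public class RemoteEtlStubSketch {

    // Stand-in for the ingestion code's document-fetch step.
    static String fetchDocument(HttpClient client, String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        if (resp.statusCode() != 200) throw new IllegalStateException("HTTP " + resp.statusCode());
        return resp.body();
    }

    public static void main(String[] args) throws Exception {
        // Stub service on an ephemeral port, serving one static XML document.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        byte[] body = "<record><title>Static Doc</title></record>".getBytes(StandardCharsets.UTF_8);
        server.createContext("/docs/1", exchange -> {
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            String url = "http://localhost:" + server.getAddress().getPort() + "/docs/1";
            String doc = fetchDocument(HttpClient.newHttpClient(), url);
            if (!doc.contains("Static Doc")) throw new AssertionError("wrong document body");
            System.out.println("fetched document OK");
        } finally {
            server.stop(0);
        }
    }
}
```

The same pattern lets you assert that the right URLs were hit (by recording requests in the stub handler) before handing the fetched document to the parser tests above.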
Building a text index
Some of our solutions have a Solr index at the heart. A Solr index allows you to do a variety of search queries against a collection of documents (generally referred to as a corpus). Our test suite for this index involved ingesting a small batch of well-known/curated documents and executing a series of Solr queries to verify which documents were returned. The queries covered all the interesting searches used by our application, including case-insensitive search (does a search for “andrew” return documents with “Andrew”?), numeric attributes (search for documents with greater than 50 citations), and entity searching (does a search for “animals” return documents with “cat” and “dog”?).
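A live Solr instance is out of scope for a sketch, but the shape of the verification queries can be shown as /select URLs. The field names (text, citations, entities) and the collection name are assumptions, not the real schema; case-insensitivity and "animals" expanding to "cat"/"dog" depend on lowercase and synonym filters configured in the field analyzers.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Builds the kind of Solr query URLs an ingestion verification suite
// would run against a freshly built index of curated documents.
public class SolrQuerySketch {

    // Assembles a /select query against a collection.
    static String selectUrl(String solrBase, String collection, String q) {
        return solrBase + "/" + collection + "/select?q="
             + URLEncoder.encode(q, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String base = "http://localhost:8983/solr";
        // Case-insensitive search: "andrew" should match "Andrew".
        System.out.println(selectUrl(base, "corpus", "text:andrew"));
        // Numeric attribute: strictly more than 50 citations
        // (Solr range syntax, exclusive lower bound).
        System.out.println(selectUrl(base, "corpus", "citations:{50 TO *]"));
        // Entity search: relies on synonym/taxonomy expansion.
        System.out.println(selectUrl(base, "corpus", "entities:animals"));
    }
}
```

The suite would issue each URL against the index and assert on the document IDs returned, since the curated corpus makes the expected results known in advance.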
Smoke testing the ingestion
Reiterating the test pyramid notion, it is very useful to have a smoke test suite for your ingestion test suite. The smoke tests I have used in ingestion are very simple. Ingest one (or a few) documents. Do you have a database produced, and does it contain any rows? Or do you have a text index produced, and are there any documents in it? As with our other smoke test suites, the important thing is a quick, non-brittle test. Test for non-zero output, not a specific number of outputs; other tests can verify the exact count.
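The non-zero check is deliberately trivial. A minimal sketch, with rowCount standing in for whatever your database or index actually exposes:

```java
import java.util.ArrayList;
import java.util.List;

// Smoke-check sketch: after ingesting one document, assert the output
// is non-empty -- not a specific count, which other tests verify.
public class IngestionSmokeSketch {

    // Stand-in for a row count query against the ingestion output.
    static long rowCount(List<String> table) {
        return table.size();
    }

    public static void main(String[] args) {
        List<String> table = new ArrayList<>();
        table.add("doc-1"); // pretend the one-document ingestion ran

        // Non-brittle assertion: any output at all counts as a pass.
        if (rowCount(table) <= 0) throw new AssertionError("ingestion produced no rows");
        System.out.println("smoke test passed");
    }
}
```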
Setting up a good ingestion verification test suite
As an ingestion developer, you may enjoy talking about your ability to ingest millions of documents, or the blazing speed of your ingestion process, but the ingestion process is no good if it does not support the needs of the application. The ingestion test suite should certainly prove that the ingestion worked functionally, but it must also include a basic exercise of the ingested output, as the full application will do at runtime. Talk with the rest of the solution team, find out what kinds of queries they want to run against your ingestion output, and verify that the ingestion output supports those queries.
In case of errors
The severity of ingestion test suite errors is determined by what type of ingestion process you have. If you have continuous ingestion, an ingestion test suite failure may be a Severity 1 – System Down situation. For a rip-and-replace ingestion process, an error simply means you have to stay on the previous successful ingestion a little longer. No matter the severity, the same troubleshooting techniques described in our smoke test post make sense here: scan the logs and notify the right people.
The ingestion process runs at the beginning of any cognitive system, and errors in ingestion may prevent the rest of the solution from having access to data. Thus, treat ingestion verification testing as a critical part of your testing plan. The ingestion verification test needs to focus not just on whether the ingestion process works, but on whether it produces output that is usable by the rest of the cognitive system, supporting all the major query types that system is known to execute.