The Automated AI for Decision-Making APIs are part of IBM Research early access offerings for scaling and automating AI. The Automated AI for Decision-Making (AutoDO) APIs provide access to each of the methods described in this tutorial in an easy-to-use manner.
The API service is hosted on the IBM Developer API Hub platform where you can get a free trial and try out the service with ease. Within the IBM Developer API Hub, you find information that includes an overview of the offering, a getting-started tutorial, and complete API descriptions and usage.
This tutorial demonstrates each of the APIs by using the Python requests module as well as the OpenAPI (Swagger) documentation. A Jupyter Notebook using these APIs is also available.
Steps
Step 1. Authentication
Each API request requires authentication information in the request header. The authentication credentials include the Client ID and the Client secret. To create a new credential:
Click Get trial subscription on the Automated AI for Decision-Making page.
Subscribe to a free trial by using your IBMid. If you don’t have an IBMid, create one during the process.
Log in using your IBMid, and go to the dashboard of your developer profile.
Launch the AutoDP_Trial.
Click APIs, then click Automated AI for Decision-Making.
In the Key Management section, expand the drop-down list of API keys. The Client ID and Client secret are ready to use. Note that it might take up to 1 hour for the subscribed product to appear on the APIs page.
Step 2. Health check
To verify whether your credentials are valid and whether the Automated AI for Decision-Making service is working, you can start with the /autodo/health-check endpoint.
Go to the Health Check tab on the left of the Automated AI for Decision-Making page.
Click Try this API.
Complete the X-IBM-Client-Id and X-IBM-Client-Secret fields with the Client ID and Client secret that you got in Step 1, then click Run request.
If the service is connected and the credentials are correct, a response with the status code 200 is returned. You can insert the code snippets of different languages into your program by selecting the language that you want in the code snippet section.
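If you prefer to call the endpoint directly, a minimal sketch from Python with the requests module might look like the following; the credential values are placeholders that you replace with your own.

import requests

headers = {
    "X-IBM-Client-Id": "REPLACE_WITH_YOUR_CLIENT_ID",
    "X-IBM-Client-Secret": "REPLACE_WITH_YOUR_CLIENT_SECRET",
}

# A 200 response means the service is reachable and the credentials are valid
resp = requests.get("https://api.ibm.com/autodo/run/autodo/health-check", headers=headers)
print(resp.status_code)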
Step 3. Managing and submitting an automated decision optimization job
Submission workflow
A typical workflow consists of checking for existing jobs, submitting a new job, checking its status, retrieving the results upon completion, and optionally, deleting the job at any stage during its lifecycle (queued/running/completed). Note that AutoDO jobs take several hours to complete, depending on the complexity of the environment, the number of agents used, and the number of trials/episodes.
This section briefly describes each of these APIs in the order provided previously. All code examples are in Python3, but any other language can be used as well, including curl from the command line.
For each of the APIs, you must provide your credentials (X-IBM-Client-Id/X-IBM-Client-Secret) as described in Step 2 (Health Check). Additionally, for certain calls you must provide an email address for content filtering. In the future, this might also be used to provide the client with status updates.
The base API prefix for all calls depends on the location of the server. This tutorial assumes that you are accessing these APIs through the IBM Developer API Hub. In this case, the API prefix is https://api.ibm.com/autodo/run. All APIs for the AutoDO service are in the autodo namespace. Therefore, the full API is of the form https://api.ibm.com/autodo/run/autodo/<endpoint>.
Checking for previous submissions
To check for previous submissions, use the GET method on the /autodo/submissions?email=XYZ endpoint.
Go to List my Submissions on the left of the Automated AI for Decision-Making page.
On this page, you see how you can use curl to access the API as well as sample code for a number of different languages. Later, this tutorial goes into additional details using Python3 and requests.
The following code block imports some modules and sets up some variables that you’ll use in this tutorial.
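A minimal setup sketch is shown below; the credentials and email address are placeholders that you replace with your own values.

import json
import requests

# Base prefix for the AutoDO APIs on the IBM Developer API Hub
API_BASE = "https://api.ibm.com/autodo/run/autodo"

# Credentials from Step 1 (placeholders)
HEADERS = {
    "X-IBM-Client-Id": "REPLACE_WITH_YOUR_CLIENT_ID",
    "X-IBM-Client-Secret": "REPLACE_WITH_YOUR_CLIENT_SECRET",
}

# Email address used for content filtering
EMAIL = "you@example.com"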
After this one-time setup, you call the submissions API to check for existing submissions as follows.
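The sketch below reuses the setup variables defined earlier.

# List existing submissions for this email address
resp = requests.get(f"{API_BASE}/submissions", headers=HEADERS, params={"email": EMAIL})
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))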
The returned result is a list of your existing submissions, their status, and related metadata.
Submitting a new job with a named gym environment
Currently, there are two types of submissions: named gym environments and custom user environments. The named gym environments are used here to demonstrate the system and the APIs. For custom environments, you should contact the project team, and they will work with you to get your environment into the correct form and runnable in the back-end cluster.
You use a POST request to submit a basic environment; you can provide a small set of parameters that includes the environment name and a limited set of runtime options. The API endpoint is /autodo/submissions, and you can find more details from the Create Basic Submission link in the left menu on the Automated AI for Decision-Making page.
This POST request is more complicated than the typical GET or DELETE request because you must provide the JSON parameters in the body of the request. This can be done simply with the following code block (remember to do the initialization described in 'Checking for previous submissions' earlier in this tutorial).
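The sketch below reuses the setup variables from earlier; the body fields shown (an email address and an environment name) are illustrative, so check the Create Basic Submission documentation for the exact schema.

# Illustrative request body -- field names may differ; see the OpenAPI documentation
payload = {
    "email": EMAIL,
    "env_name": "CartPole-v1",   # a named gym environment
}

resp = requests.post(f"{API_BASE}/submissions",
                     headers={**HEADERS, "Content-Type": "application/json"},
                     json=payload)
resp.raise_for_status()
submission = resp.json()
print(submission)
submission_uuid = submission.get("uuid")   # field name assumed; check the actual response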
The response is a submission UUID and the current status set to queued. Other metadata might be provided.
After the job is in the queued state, the back-end scheduler checks for sufficient resources on the cluster to run the job. When these resources are available, the job is moved into the running state. After the job is complete (whether successful or not), it is moved into the completed state. You should periodically check on the status of your job using the API in the next section.
Getting submission status
You can use the GET method on /autodo/submissions/<submission_UUID> to retrieve a specific submission’s status. For more information, see the OpenAPI specification.
The following code shows the Python3 code for this request.
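This sketch reuses the setup variables from earlier, where submission_uuid is the UUID returned when the job was created.

# Check the status of a specific submission
resp = requests.get(f"{API_BASE}/submissions/{submission_uuid}", headers=HEADERS)
resp.raise_for_status()
print(resp.json())   # current state (queued/running/completed) plus metadata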
As mentioned previously, this returns the current state plus other submission metadata. When complete, you can retrieve the completed results that are stored by the service.
Retrieving the top-k models and scores
The AutoDO service returns the top-k models (typically 3), their tabulated scores, and a script that you can use to load and evaluate the models in your own environment. These artifacts are packaged together into a .tar file. The URL of the .tar file is given to you through the following API (/autodo/submissions/<submission_uuid>/results). See more details on the OpenAPI endpoint.
The following code shows the Python3 code using the requests module.
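A sketch of the results call, reusing the earlier setup; the exact response fields may differ from what is shown.

# Retrieve the URL of the .tar file with the top-k models and scores
resp = requests.get(f"{API_BASE}/submissions/{submission_uuid}/results", headers=HEADERS)
resp.raise_for_status()
results = resp.json()
print(results)   # contains the URL of the SUBMISSION_UUID.tgz archive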
You can access the provided URL through curl or Wget to obtain the results. The results are provided in a file named SUBMISSION_UUID.tgz, which is a .tar gzipped file that can be unpacked through the tar xfz SUBMISSION_UUID.tgz command. As the final step, you might want to cancel your submitted job or delete it after completion.
Canceling or deleting a submission
You can call the DELETE method on the /autodo/submissions/<SUBMISSION_UUID> endpoint, as shown in the OpenAPI documentation.
The following code shows the Python3 code for this operation.
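A sketch of the delete call, reusing the earlier setup.

# Cancel a queued/running job or delete a completed one
resp = requests.delete(f"{API_BASE}/submissions/{submission_uuid}", headers=HEADERS)
print(resp.status_code)   # a success code indicates the job was canceled or deleted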
This API call cancels and terminates any running jobs, and removes any generated artifacts.
Additional APIs
The complete list of APIs can be found on the IBM Developer API Hub (https://developer.ibm.com/apis/catalog/autodo--automated-decision-optimization). In addition to the APIs described previously, there are APIs for listing the available agents and the available environments.
Step 4. Results: additional detail and usage
After the .tar file is extracted, you see the following organization of the results.
custom_agents: Folders containing custom agents, which are needed to load certain agents and perform evaluations.
eval.py: A script to load a pretrained agent and perform evaluation by running roll-outs in the environment.
pipeline_0, ..., pipeline_k: Folders containing the top-k pipelines.
results.csv: A CSV file that summarizes the evaluation results of all top-k agents. It contains the agent names along with the mean and standard deviation of rewards during evaluation.
Organization of results
In each of the pipeline_k folders, you find a hyperparameters.json file that contains agent-related information such as the agent name and hyperparameter choices. It also contains either a best_model.zip file or a best_model folder, which is the trained model for stable_baselines or rllib agents, respectively. In particular, the structure of the pipeline_k folder can be described as follows.
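Based on that description, the layout is roughly:

pipeline_k/
├── hyperparameters.json    # agent name and hyperparameter choices
└── best_model.zip          # trained model (stable_baselines agents)
    or best_model/          # trained model folder (rllib agents)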
To evaluate a trained agent, you can use the eval.py script. The script accepts the following arguments.
-e or --env_name: The name of the registered environment.
--paths: The path to the pipeline_k folder. The path should point to a folder that contains the hyperparameters.json file and the saved model: either a best_model.zip or a best_model folder.
-n or --n_eval_episodes: The number of episodes to sample from the environment for evaluation.
--seed: The random seed used for evaluation.
Example 1: Evaluate an agent
The following command evaluates the trained agent in pipeline_1 for 50 episodes. Note that you normally do not need to provide the environment name because the script finds that information in the hyperparameters.json file.
python eval.py --paths pipeline_1 -n 50
Example 2: Evaluate multiple agents
The following command evaluates agents in both pipeline_1 and pipeline_2 using 50 episodes.
python eval.py --paths pipeline_1 --paths pipeline_2 -n 50
Summary
This brief tutorial walked you through a typical submission workflow. As the service matures, additional API endpoints will be created and documented in the API Hub. Also, as mentioned previously, we are working on supporting customer-provided environments in a safe and secure manner. Please contact the development team for more details and early access.