
Not just for rocket scientists


Introduction

IBM Bayesian Optimization Accelerator (BOA) is a do-it-yourself toolkit for applying state-of-the-art Bayesian inferencing techniques to obtain optimal solutions for complex, real-world design simulations, without requiring deep machine learning skills. This tool has produced fascinating results in chip design, drug discovery, Formula 1 car design, and even wine quality optimization.

What follows is a hypothetical conversation between an IBM researcher and his intern about the Bayesian optimization method, IBM's differentiation, its ease of use, and how IBM Lab Services is helping organizations take advantage of this innovative solution.

Researcher: Modeling and simulation have traditionally relied on advances in high-performance computing (HPC) for high-fidelity models. However, it would be hard for the HPC community alone to meet the ever-increasing demand for additional degrees of freedom. Our BOA solution is right on target for the current modeling and simulation landscape.

Intern: Bayesian optimization is one of those methods for hyperparameter optimization in machine learning, such as finding the optimal learning rate for training a neural network, isn’t it?

Researcher: Well, it’s used for hyperparameter optimization and also automatic machine learning (AutoML). But those are just two applications among many. It is a machine learning algorithm in itself. It belongs to the broader family of Bayesian methods for machine learning.
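
For a concrete taste of the hyperparameter use case, here is a minimal sketch using the open-source Hyperopt library (not BOA itself); the objective below is a toy formula standing in for an actual training-and-validation run.

```python
import numpy as np
from hyperopt import fmin, tpe, hp

# Toy stand-in for "validation loss as a function of learning rate".
# A real objective would train a network and return its validation loss.
def objective(lr):
    return (np.log10(lr) + 3.0) ** 2  # pretend the optimum is lr = 1e-3

best = fmin(fn=objective,
            space=hp.loguniform("lr", np.log(1e-5), np.log(1e-1)),
            algo=tpe.suggest,   # Tree-structured Parzen Estimator
            max_evals=30)
print(best)                     # for example, {'lr': 0.001...}
```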

Intern: What else can it be used for?

Researcher: Any arbitrary black box function optimization.

Intern: What do you mean?

Researcher: Think of an arbitrary design simulation that takes anywhere from several minutes to a few hours to execute on state-of-the-art compute infrastructure. (See the right side of Figure 1.) Say this design has certain input parameters that need to be set by the user, and each of these parameters can take a range of values. The user may have some intuition about how the simulation output changes with the input parameters but cannot precisely express that input-output relationship. The user is trying to find the parameter values for which the simulation produces optimal output. For example, an engineer minimizing the drag of an F1 car would want to run an airflow simulation program with different dimensions of a front wing component.

It is an optimization problem for which the objective function is a black box! Such situations occur time and again in science and engineering. Typically, engineers manually tweak these parameters until a satisfactory result is obtained. Bayesian optimization gives engineers a break by automating this parameter tweaking process using a sophisticated mathematical formulation. (The algorithm is shown on the left side of Figure 1.)
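
To make this concrete, here is a minimal sketch of such a black box in code; the drag_simulation function, its parameters, and its formula are invented for illustration, since a real black box would be an expensive external simulation rather than a cheap expression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive airflow simulation: drag as a
# function of front-wing chord length and angle of attack. A real black box
# might take minutes to hours per call; this formula only mimics its interface.
def drag_simulation(chord_mm, angle_deg):
    return (0.002 * (chord_mm - 250.0) ** 2
            + 0.05 * (angle_deg - 4.0) ** 2
            + rng.normal(scale=0.01))  # simulation noise

# Manual tweaking amounts to trying a handful of settings by hand:
for chord in (200.0, 250.0, 300.0):
    for angle in (2.0, 4.0, 6.0):
        print(chord, angle, drag_simulation(chord, angle))
```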

Figure 1. Bayesian optimization is a sophisticated mathematical technique for design space exploration in science and engineering applications

Intern: What is this method doing to get the optimal result? It is filled with complicated mathematical terms such as surrogate models, acquisition functions, and posterior inferencing.

Researcher: Glad you asked.

The underlying math is surprisingly simple. The objective function here is not differentiable; it has no analytical expression, but it can be evaluated at a given point. We define our prior belief about this function using a surrogate model, which is typically a Gaussian process with a kernel function. Such surrogate models can represent arbitrary functions and are far cheaper to evaluate than the original function.
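
As a rough illustration of the surrogate idea (a sketch, not BOA's implementation), the following fits a Gaussian process with an RBF kernel to a few observations and queries the posterior mean and variance at new points.

```python
import numpy as np

# Squared-exponential (RBF) kernel, a common default for GP surrogates.
def rbf_kernel(A, B, length_scale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

# Exact GP posterior at new points X_new, given observations (X, y).
def gp_posterior(X, y, X_new, noise=1e-6):
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_new)
    K_ss = rbf_kernel(X_new, X_new)
    alpha = np.linalg.solve(K, y)
    mean = K_s.T @ alpha  # posterior mean
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    var = np.maximum(np.diag(cov), 0.0)  # posterior variance (clamped at zero)
    return mean, var

# Example: three observations of an unknown 1-D function.
X = np.array([[0.1], [0.5], [0.9]])
y = np.array([0.8, 0.1, 0.6])
mean, var = gp_posterior(X, y, np.linspace(0, 1, 5).reshape(-1, 1))
```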

The next ingredient is an acquisition function defined over the surrogate. This function indicates the usefulness of evaluating each point in the domain. The choice of acquisition function, such as expected improvement, probability of improvement, or upper confidence bound, allows a trade-off between exploration (reducing uncertainty) and exploitation (improving the solution). These acquisition functions can also be substituted with Markov chain Monte Carlo sampling methods.
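
For instance, expected improvement for a minimization problem takes only a few lines; a sketch under the same illustrative assumptions as above:

```python
import numpy as np
from scipy.stats import norm

# Expected improvement for minimization: the expected amount by which a
# candidate improves on the best objective value observed so far, given the
# surrogate's posterior mean and standard deviation at that candidate.
# xi nudges the balance between exploration and exploitation.
def expected_improvement(mean, std, best_y, xi=0.01):
    std = np.maximum(std, 1e-12)  # guard against division by zero
    z = (best_y - mean - xi) / std
    return (best_y - mean - xi) * norm.cdf(z) + std * norm.pdf(z)
```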

In each iteration, the candidate points for evaluating the original function are selected by optimizing the acquisition function (or by taking the maximum values from random samples). We then perform posterior inferencing (Gaussian process regression) on the surrogate. You see, the approach is Bayesian, hence the name. After a few iterations, there is a good chance of ending up with a better result than other methods, such as grid search and random search, would produce.
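
Putting the loop together end to end, here is a sketch using the open-source scikit-optimize library as a stand-in (BOA's own API differs): on each iteration it fits the GP surrogate, maximizes the acquisition function to pick the next candidate, evaluates the black box, and updates the posterior.

```python
from skopt import gp_minimize

# A cheap analytic objective stands in for the expensive black box.
def objective(params):
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

result = gp_minimize(objective,
                     dimensions=[(0.0, 1.0), (0.0, 1.0)],  # parameter bounds
                     acq_func="EI",   # expected improvement
                     n_calls=25,      # total black-box evaluations
                     random_state=0)
print(result.x, result.fun)  # best parameters found and their objective value
```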

Intern: Do you expect me to follow what you said and continue this discussion?

Researcher: Wait! That’s the whole point. IBM BOA hides all this complexity under an easy-to-use API and default choices. In addition, there’s a graphical interface that makes life even easier.

Intern: Okay. BOA is a black box that optimizes another black box. Is it just another optimization toolkit then?

Researcher: No. There are several innovations that set it apart from existing implementations (such as scikit-learn and Hyperopt): dimensionality mitigation by compressed sensing, smart initialization, novel acquisition functions, and explainability of selected parameter configurations, just to name a few. These innovations extend the applicability of Bayesian optimization to various design problems in an unprecedented manner.

Intern: I’d like to use it on my client’s design simulation. How much work is involved?

Researcher: Not a whole lot. You write a software interface to communicate with your client's design simulation black box; the BOA SDK makes this easier than it sounds. There is a public GitHub repository with sample interface implementations that you can download and modify to meet your requirements. Lastly, you specify your choice of surrogate, acquisition function, and a few other associated parameters.
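
To give a feel for the shape of such an interface, here is a generic Python sketch; this is not the BOA SDK's actual API, and the executable name, file names, and JSON key are all hypothetical.

```python
import json
import subprocess

# Hypothetical wrapper around a client's simulation executable. The general
# pattern: serialize the candidate parameters, run the black box, and parse
# the objective value back out for the optimizer.
def run_simulation(params):
    with open("params.json", "w") as f:
        json.dump(params, f)
    # "./airflow_sim" is a made-up binary name for this sketch.
    subprocess.run(["./airflow_sim", "--config", "params.json"], check=True)
    with open("results.json") as f:
        return json.load(f)["drag"]  # objective value to be minimized
```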

Intern: That’s still a lot of work. Sounds like it also requires some understanding of the Bayesian optimization method. Didn’t you say BOA isn’t just for rocket scientists?

Researcher: A little understanding is helpful but not required. The default choices built into the examples are a good starting point. We recommend the approach of "If you don't know what a parameter is, leave it alone." After a few BOA experiments, the terms and parameters start to make good intuitive sense.

Further, IBM Lab Services has a team of experienced consultants to assist clients who want a quick start or want to reach the best possible results quickly. The team offers a range of services, including setup, technical training and best practices, case studies, brainstorming high-impact use cases, writing interface functions, and further optimization expertise.

IBM Bayesian Optimization Accelerator was released on November 27, 2020, for general availability on IBM Power Systems hardware. To learn more, or for help with your project, contact IBM Lab Services today.

Use case references