The optimization of a design — for example, a structure, product, system, or algorithm — often relies on time-intensive computer simulations that take decision variables (design parameters) as input and then produce a measure of the performance of the design.
Finding the values of the decision variables that optimize the measure of performance is challenging but can yield substantial gains. However, a computer simulation is often a black box from a mathematical point of view — we do not know how to express the computation performed by the simulator as a mathematical formula. We might not have access to the source code of the simulation software, and reverse-engineering the formulas from the binary files may be complicated or even illegal. Even when the source code is available, the operations performed by the simulator may be too complex to be expressed by a manageable mathematical expression. We therefore do not have a mathematical description of the measure of performance, and traditional mathematical optimization algorithms such as gradient descent become impractical, because they require access to such a description.
Finding values of the decision variables that optimize the measure of performance becomes even more difficult when evaluating a single design candidate is time-consuming, which is often the case when complex computer simulations are required. This time commitment limits the overall number of design candidates that can be evaluated during the optimization process.
What technology problem will I help solve?
From a technical point of view, the optimization algorithms implemented in RBFOpt rely on a surrogate model of the unknown performance measure; that is, on a mathematical model that tries to learn the mapping of the inputs (decision variables) to the outputs (performance measure, to be optimized).
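The idea of a surrogate model can be illustrated with a toy one-dimensional example. The sketch below is not RBFOpt's implementation — the function names are hypothetical — but it shows what a radial basis function surrogate is: fit an interpolant s(x) = Σᵢ wᵢ φ(|x − xᵢ|) through a handful of expensive samples, then query it cheaply anywhere. The multiquadric kernel used here is one common choice of φ.

```python
# Toy RBF surrogate (illustrative only, not RBFOpt's implementation):
# fit s(x) = sum_i w_i * phi(|x - x_i|) through sampled points of an
# expensive black box, then evaluate s cheaply anywhere.

def fit_rbf_weights(xs, fs, phi):
    """Solve the interpolation system Phi w = f by Gaussian elimination."""
    n = len(xs)
    # Augmented matrix [Phi | f], with Phi[i][j] = phi(|x_i - x_j|).
    A = [[phi(abs(xs[i] - xs[j])) for j in range(n)] + [fs[i]]
         for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= factor * A[col][c]
    # Back substitution.
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (A[r][n] - sum(A[r][c] * w[c]
                              for c in range(r + 1, n))) / A[r][r]
    return w

def surrogate(x, xs, w, phi):
    """Evaluate the fitted interpolant at x -- this is cheap."""
    return sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

# Pretend this is an expensive simulation we could only afford to run
# five times.
expensive = lambda x: (x - 2.0) ** 2
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
fs = [expensive(x) for x in xs]

multiquadric = lambda r: (1.0 + r * r) ** 0.5
w = fit_rbf_weights(xs, fs, multiquadric)
```

By construction the surrogate reproduces the expensive function exactly at the sampled points, and it gives an inexpensive approximation everywhere in between — which is what lets an optimization algorithm probe the design space without paying for a simulation at every step.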
These algorithms are called “derivative-free” because they do not require the computation of the gradient of the performance measure, unlike traditional optimization algorithms.
A well-known class of derivative-free methods is evolutionary algorithms, such as genetic algorithms. In principle, genetic algorithms and RBFOpt aim to solve the same type of problem. However, there is a fundamental difference in the approach: to be effective, genetic algorithms typically have to evaluate the performance of several thousand design candidates, which may be prohibitive when each evaluation requires a time-consuming simulation, whereas RBFOpt is conceived with the goal of requiring as few simulations as possible.
RBFOpt implements some of the most advanced techniques that the mathematical optimization community has conceived to tackle the problem we just described. It aims to find values of the decision variables that yield optimal performance while performing only a handful of computer simulations. In particular, RBFOpt provides multiple full-fledged derivative-free optimization algorithms that can be used out-of-the-box by inexperienced users, as well as a wide array of customization capabilities for advanced users. RBFOpt has been applied to many different areas, including structural engineering, daylighting optimization, information retrieval, and machine learning — and the possibilities are endless!
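To make the "few simulations" idea concrete, here is a bare-bones surrogate-assisted loop. This is emphatically not RBFOpt's actual algorithm — RBFOpt's methods balance sampling where the surrogate looks promising against sampling where little is known in a principled way — but it sketches the basic cycle under simplifying assumptions: fit a surrogate to all samples so far, minimize the cheap surrogate, spend one real evaluation at that point, and repeat until the evaluation budget is exhausted.

```python
# Bare-bones surrogate-assisted optimization loop (illustrative only).
# Bisecting the largest unexplored gap is a crude stand-in for the
# exploration strategies a real library would use.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_surrogate(xs, fs):
    """Multiquadric RBF interpolant of the sampled points."""
    phi = lambda r: (1.0 + r * r) ** 0.5
    w = solve([[phi(abs(a - b)) for b in xs] for a in xs], fs)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

def optimize(f, lo, hi, budget):
    xs = [lo, (lo + hi) / 2.0, hi]      # small initial design
    fs = [f(x) for x in xs]             # expensive evaluations
    while len(xs) < budget:
        s = rbf_surrogate(xs, fs)
        grid = [lo + (hi - lo) * i / 1000 for i in range(1001)]
        cand = min(grid, key=s)         # cheap: minimize the surrogate
        if min(abs(cand - x) for x in xs) < 1e-3:
            # Surrogate minimum sits on a known point: explore instead
            # by bisecting the largest gap between samples.
            pts = sorted(xs)
            _, a, b = max((b - a, a, b) for a, b in zip(pts, pts[1:]))
            cand = (a + b) / 2.0
        xs.append(cand)
        fs.append(f(cand))              # one more expensive evaluation
    fbest, xbest = min(zip(fs, xs))
    return xbest, fbest, len(xs)

# Toy "simulation" with its minimum at x = 3.7; twenty evaluations of
# the black box are enough to get close.
expensive = lambda x: (x - 3.7) ** 2 + 1.0
xbest, fbest, nevals = optimize(expensive, 0.0, 10.0, 20)
```

The key point is the budget: the expensive function is called exactly twenty times, while the surrogate is queried thousands of times for free. A population-based method would typically need orders of magnitude more real evaluations to locate the same region.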
Why should I contribute?
There are many reasons and ways to contribute to RBFOpt. You can help extend its capabilities or improve its performance. This requires good knowledge of derivative-free optimization and some knowledge of Python, because the library is fully written in Python. Of course, extending RBFOpt can also be seen as a way to learn more about derivative-free optimization!
Even without a background in mathematical optimization there are ways to contribute to this project:
- Benchmarking and identifying strengths and weaknesses
- Creating more accessible user interfaces
- Improving the documentation
- Providing new, challenging problems that can catalyze further algorithmic ideas
- Simply being part of the community, asking and answering questions