RBFOpt V2.1.0 is now available on GitHub!

This new version was — as usual — months in the making and it’s finally ready to move from development to release. From the outside, it’s identical to the previous one: there are no massive updates to the interface, or even new algorithmic features worthy of mention. However, the core of the algorithm has been almost entirely rewritten to improve the time per iteration. Here’s what happened, and why.

## Scaling up

Initially, RBFOpt was not intended to handle large-scale problems: in the context of derivative-free optimization, global optimization of functions with just a handful of variables is already *very* difficult, so I was not envisioning working on problems with tens or hundreds of variables. Furthermore, since the objective function evaluations typically require expensive computer simulations, the time spent inside RBFOpt is negligible compared to the total computing time. For this reason, the implementation of many of the calculations was naive, and certainly not as efficient as it could be.

Over time, my perception changed. Working with graduate students at the Singapore University of Technology and Design, I quickly realized that we could make RBFOpt more widely applicable by improving the time per iteration. Thus, Giorgio Sartor (a PhD student and soon to be a graduate of SUTD) embarked on the bold task of rewriting the core of the library to rely on NumPy for all calculations. At the same time, after profiling the algorithm, we decided that using Cython (that is, compiled code) for the most frequently executed parts could be even faster than NumPy.
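To give a flavor of what this kind of rewrite involves (this is an illustrative sketch, not RBFOpt's actual code), consider computing the pairwise distance matrix among the evaluated points, a building block of RBF interpolation. A naive double loop and a vectorized NumPy version compute the same matrix, but the vectorized one avoids Python-level iteration entirely:

```python
import numpy as np

def dist_matrix_loops(points):
    """Pairwise Euclidean distances with naive Python loops."""
    n = len(points)
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sqrt(np.sum((points[i] - points[j]) ** 2))
    return out

def dist_matrix_numpy(points):
    """Same computation, vectorized via NumPy broadcasting:
    points has shape (n, d); diff has shape (n, n, d)."""
    diff = points[:, np.newaxis, :] - points[np.newaxis, :, :]
    return np.sqrt(np.sum(diff ** 2, axis=-1))
```

For even a few dozen points in moderate dimension, the broadcast version is typically one to two orders of magnitude faster, since the inner arithmetic runs in compiled NumPy code rather than the Python interpreter.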

## Same algorithm, faster calculations

After working together on this for some time, we’ve produced RBFOpt V2.1.0. If Cython is available and the user is willing to compile, RBFOpt will use the Cython modules; otherwise, it will fall back on the NumPy implementation. While the optimization algorithm has not changed, the calculations required at each iteration are simply executed *faster*. The graph above shows the average time per iteration (in seconds) as a function of problem size: the NumPy implementation alone halves the time per iteration, and the Cython implementation improves on it further.
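The "use compiled code if present, otherwise fall back to NumPy" behavior can be implemented with a standard import-time pattern. The sketch below uses a hypothetical module name `_rbf_cython` and function `pairwise_sq_dist` purely for illustration; RBFOpt's actual module layout differs:

```python
import numpy as np

try:
    # Hypothetical compiled extension module (illustrative name only).
    from _rbf_cython import pairwise_sq_dist  # type: ignore
    HAVE_COMPILED = True
except ImportError:
    # Pure NumPy fallback: same interface, no compilation required.
    HAVE_COMPILED = False

    def pairwise_sq_dist(points):
        """Squared Euclidean distances between all pairs of rows."""
        diff = points[:, None, :] - points[None, :, :]
        return np.einsum('ijk,ijk->ij', diff, diff)
```

The rest of the library calls `pairwise_sq_dist` without caring which implementation it got, so users who skip the compilation step lose some speed but no functionality.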

If you are using RBFOpt for time-sensitive applications, I definitely recommend checking out the new version.

## A thank-you, and an invitation

Many thanks to Giorgio Sartor for his fundamental help with this release. Hopefully in the future I will have an opportunity to make a post about his great work on applying derivative-free optimization to machine learning problems!

As always, I’m eager to hear about your experience with RBFOpt — leave a comment in the space below or connect with me on the RBFOpt mailing list.