I’m extremely happy to announce RBFOpt V3.0.1, now available on GitHub! There are multiple improvements that make it, as usual, the best version of RBFOpt yet! (Until the next one, of course.)
Finally(!), RBFOpt fully supports Python 3. It works seamlessly with both Python 2.7 and Python 3.4 or higher. But the biggest changes are under the hood: a new local search strategy and an automatic model selection strategy, both of which proved effective in my tests.
- The new local search is a version of a trust region method, which, for those of you who don’t spend a lot of time reading mathematical programming papers, is an optimization method that keeps a local model of the (unknown) objective function. The size of the region over which this local model is considered accurate is automatically updated during the course of the algorithm. In principle, this allows much faster convergence to local optima, and of course this local search is embedded in the usual global search strategy of RBFOpt.
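To make the trust region idea concrete, here is a minimal sketch of such a loop. This is not RBFOpt's actual implementation (which builds its local model from radial basis function interpolation, not finite differences); it only illustrates the core mechanism: take a step within the current region, compare the decrease the model predicted against the decrease actually observed, and grow or shrink the region accordingly.

```python
import numpy as np

def trust_region_minimize(f, x0, radius=1.0, max_iter=50,
                          eta=0.1, shrink=0.5, grow=2.0):
    """Toy derivative-free trust-region loop (illustrative only).

    The 'local model' here is just a linear model from finite
    differences; a step is taken to the trust-region boundary along
    the model's steepest-descent direction, then accepted or rejected
    based on how well the model predicted the actual decrease.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    h = 1e-4
    fx = f(x)
    for _ in range(max_iter):
        # Finite-difference gradient: stand-in for the surrogate model.
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(n)])
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-8:
            break
        step = -(radius / gnorm) * g        # step to the region boundary
        predicted = gnorm * radius          # decrease the linear model predicts
        fnew = f(x + step)
        rho = (fx - fnew) / predicted       # actual vs. predicted decrease
        if rho > eta:                       # model was good enough: accept
            x, fx = x + step, fnew
            if rho > 0.75:
                radius *= grow              # very accurate model: expand region
        else:
            radius *= shrink                # poor prediction: shrink region
    return x, fx
```

The key update is the ratio `rho`: a value near 1 means the local model is trustworthy over the current region, so the region can grow; a small or negative value means the model is unreliable there, so the region shrinks.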
- The model selection strategy periodically determines the most accurate radial basis function model of the objective function by looking at how different models rank points, rather than how they predict function values. Thanks to a very useful discussion with IBM’s Sanjeeb Dash, the model selection algorithm is now significantly faster. This should prove once and for all that having a colleague who has worked on a linear programming solver is the best thing that can happen to you.
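The ranking idea can be sketched in a few lines. The snippet below is an illustrative stand-alone example, not RBFOpt's code: it fits simple RBF interpolants with a couple of candidate kernels, predicts each point by leave-one-out cross-validation, and scores each model by the fraction of point pairs it orders the same way as the true function values, rather than by prediction error.

```python
import numpy as np

def rbf_interpolant(X, y, phi):
    # Fit the weights of a plain RBF interpolant s(x) = sum_i w_i phi(||x - x_i||).
    # The tiny diagonal shift is a numerical safeguard for this sketch.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    w = np.linalg.solve(phi(D) + 1e-10 * np.eye(len(X)), y)
    return lambda Z: phi(np.linalg.norm(Z[:, None, :] - X[None, :, :], axis=2)) @ w

def rank_agreement(pred, actual):
    # Fraction of point pairs that predictions and true values order the same way.
    n, good, total = len(pred), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if (pred[i] - pred[j]) * (actual[i] - actual[j]) > 0:
                good += 1
    return good / total

def select_model(X, y, kernels):
    # Leave-one-out: predict each held-out point with each candidate kernel,
    # then keep the kernel whose predictions rank the points best.
    scores = {}
    for name, phi in kernels.items():
        preds = []
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            s = rbf_interpolant(X[mask], y[mask], phi)
            preds.append(s(X[i:i + 1])[0])
        scores[name] = rank_agreement(np.array(preds), y)
    return max(scores, key=scores.get), scores
```

Scoring by rank agreement, rather than squared prediction error, rewards the model that is most useful for deciding *where* to evaluate next, which is what matters inside the search.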
Some of the code has been refactored, and the parallel search has been made more stable. Because of the refactoring, version 3 of RBFOpt is not compatible with version 2; however, existing code based on version 2 should require very few changes. The changelog provides more details on what was changed. Also, thanks to several code improvements, iterations now run faster, which should make the algorithm more scalable.
To measure the performance of different algorithms, I always look at several graphs generated on a set of my favorite 42 problems with different characteristics. I then run the algorithm 20 times on each problem, with different random seeds, and look at the median performance on each instance. There are several methods to aggregate the data generated this way; one method commonly used in the derivative-free optimization community is based on performance and data profiles, described in a paper by Jorge Moré and Stefan Wild. Performance profiles report performance ratios of algorithms on a set of instances. In particular, a point with coordinates (x, y) on the curve for method M in a performance profile indicates that, on a fraction y of the instances, method M solves the problem using at most x times as many iterations as the best-performing method. Performance profiles are constructed for a given accuracy.
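The construction just described is straightforward to compute. Here is a short sketch, assuming a matrix of iteration counts (one row per problem, one column per solver, with infinity marking a failure to reach the target accuracy):

```python
import numpy as np

def performance_profile(T, taus):
    """Compute performance-profile curves.

    T: (problems x solvers) array of iterations needed to reach the
       target accuracy; np.inf means the solver failed on that problem.
    Returns, per solver, the fraction of problems solved within tau
    times the iterations of the best solver, for each tau in taus.
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)   # best iteration count per problem
    ratios = T / best                     # performance ratio per (problem, solver)
    return {s: [(ratios[:, s] <= tau).mean() for tau in taus]
            for s in range(T.shape[1])}
```

Plotting each solver's list of fractions against `taus` (usually on a log scale for the x-axis) yields exactly the curves described above: the value at tau = 1 is the fraction of problems on which a solver was the fastest, and the limiting value on the right is the fraction it solved at all.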
Long story short: in a performance profile, higher curves are better than lower curves. Here you can see performance profiles for the current and the previous version of RBFOpt, requiring a convergence accuracy of 0.01 and 0.0001. These graphs are so beautiful because RBFOpt V3.0.1 works a lot better than V2.1.0! This always gives me a warm, fuzzy feeling.
I hope you find the improvements in the new version of RBFOpt useful. As always, I am happy to hear comments and feedback!