Single objective optimization algorithms are the foundation upon which more complex methods, like
multi-objective, niching and constrained optimization algorithms, are built. Consequently,
improvements to single objective optimization algorithms are important because they can impact other
domains as well. These algorithmic improvements depend in part on feedback from trials conducted
with single objective benchmark functions, which themselves are the elemental building blocks for
more complex tasks, like dynamic, niching, composition and computationally expensive problems. As
algorithms improve, ever more challenging functions must be developed. This interplay between
methods and problems drives progress, so we have developed the CEC’20 Special Session on Real-
Parameter Optimization to promote this symbiosis.
Improved methods and problems sometimes require updating traditional testing criteria. In recent
years, many novel optimization algorithms have been proposed to solve the bound-constrained, single
objective problems offered in the CEC’05 [1], CEC’13 [2], CEC’14 [3] and CEC’17 [4] Special Sessions on
Real-Parameter Optimization. In those competitions, the maximum allowed number of function
evaluations did not scale exponentially with dimension, even though problem complexity does. To
address this disparity, this competition significantly increases the maximum number of allowed function
evaluations for 10 scalable benchmark problems beyond their prior contest limits, with the goal of
determining the extent to which this extra budget translates into improved solution accuracy.
Participants are required to send their final results to the organizers in the format specified in this
technical report. Based on these results, organizers will present a comparative analysis that includes
statistical tests on convergence performance to compare algorithms with similar final solutions.
Participants may not explicitly use the equations of the test functions, e.g. to compute gradients. This
competition also excludes surrogate and meta-models. Papers on novel concepts that help us to
understand problem characteristics are also welcome. The C and Matlab codes for the CEC’20 test
suite can be downloaded from the website below:
https://github.com/P-N-Suganthan
1. Introduction to the CEC’20 Benchmark Suite
1.1. Some Definitions:
All test functions are minimization problems defined as follows:

Min f(x), x = [x_1, x_2, ..., x_D]^T

o = [o_1, o_2, ..., o_D]^T: the shifted global optimum (defined in “shift_data_x.txt”), which is randomly
distributed in [-80, 80]^D. All test functions are shifted to o and are scalable.

Search range: [-100, 100]^D. For convenience, the same search range is defined for all test functions.
M_i: rotation matrix. Different rotation matrices are assigned to each function and each basic function.
Considering that linkages seldom exist among all variables in real-world problems, CEC’20 randomly
divides variables into subcomponents. The rotation matrix for each set of subcomponents is generated