
In this article, I will outline the implementation of the Ant Colony Optimization (ACO) algorithm (with sample code) and show how to use it to minimize some common continuous-domain benchmark functions. The ACO algorithm belongs to the family of so-called nature-inspired metaheuristic algorithms, which are gaining popularity in the machine learning domain.
The structure of this article is as follows:
- Introduction
- Problem Specification
- ACO Algorithm
- Code Implementation Details
- Benchmark Test Results
- Concluding remarks
- References
1. Introduction
Global optimization is a branch of mathematics and computer science that develops algorithms that can be used to find the global minima or maxima of continuous domain functions or a set of functions for a given dataset [1]. Optimization problems of all sorts arise in a number of quantitative disciplines. Some examples of these in Machine Learning (ML) include:
- Hyper-parameter optimization of ML algorithms [2]
- ML algorithm feature selection [3]
- Loss/cost function optimization for deep learning or classical ML algorithms [4, 5]
- Neural network optimal architecture design [6]
- Best path optimization – such as the Traveling Salesman Problem [7]
In general, optimization problems involve minimizing (or maximizing) a continuous N-dimensional function by iteratively selecting N inputs within their bounds and evaluating the function value.
Global optimization algorithms can be broadly categorized as follows:
- Deterministic global optimization [8]
- Metaheuristic global optimization [9]
ACO is a nature-inspired metaheuristic optimization routine, and this article will focus on this algorithm alone. There is, however, a plethora of other nature-inspired metaheuristic optimization algorithms; some of these include:
- Simulated Annealing
- Genetic Algorithms
- Particle Swarm Optimization
- Artificial Bee Colony Optimization
- Multi-verse optimization
- Bat Algorithm Optimization
- Firefly Algorithm
- and many others; further details can be found in [10]
I will explore some of these metaheuristic algorithms in my future articles. The next part of this article will describe the problem specification.
2. Problem Specification
The problem specification is to use the ACO algorithm to solve the minimization of 5 benchmark classical continuous domain functions. These functions are:
- Rosenbrock with properties [11]:
- Number of dimensions, d = 10
- Xi ∈ [-5.0, 10.0], for all i = 1, …, d
- f(X) = 0.0 at X = (1.0, …,1.0)
- Ackley with properties [12]:
- Number of dimensions, d = 10
- Xi ∈ [-32.768, 32.768], for all i = 1, …, d
- f(X) = 0.0 at X = (0.0, …,0.0)
- Sphere [13] with properties:
- Number of dimensions, d = 10
- Xi ∈ [-5.12, 5.12], for all i = 1, …, d
- f(X) = 0.0 at X = (0.0, …,0.0)
- Styblinski-Tang with properties [14]:
- Number of dimensions, d = 10
- Xi ∈ [-5.0, 5.0], for all i = 1, …, d
- f(X) = -39.16599d at X = (-2.903534, …, -2.903534)
- Rastrigin with properties [15]:
- Number of dimensions, d = 10
- Xi ∈ [-5.12, 5.12], for all i = 1, …, d
- f(X) = 0.0 at X = (0.0, …,0.0)
3. ACO Algorithm
The pseudocode for the ACO algorithm used to solve the minimization problems described in the previous section is outlined below:
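In code form, the main loop of the ACO-for-continuous-domains (ACOR) variant [16, 17] can be sketched in Python as follows. This is a minimal illustration: the archive size, q, and xi defaults here are my own choices, not the tuned hyper-parameters used for the benchmark runs.

```python
import math
import random

def acor_minimize(cost_func, n_vars, bounds, n_pop=10, archive_size=10,
                  q=0.5, xi=0.85, max_iters=500, seed=42):
    """Minimize cost_func over [lo, hi]^n_vars using an ACOR solution archive."""
    rng = random.Random(seed)
    lo, hi = bounds

    # 1. Initialize the solution archive with random solutions, sorted by cost.
    archive = sorted(
        ((cost_func(x), x) for x in
         ([rng.uniform(lo, hi) for _ in range(n_vars)]
          for _ in range(archive_size))),
        key=lambda s: s[0])

    # 2. Rank-based Gaussian kernel weights: better-ranked solutions are more
    #    likely to guide new ants (q controls how greedy the selection is).
    w = [math.exp(-(l ** 2) / (2.0 * (q * archive_size) ** 2))
         for l in range(archive_size)]

    for _ in range(max_iters):
        new_solutions = []
        for _ in range(n_pop):
            # 3. Each ant picks one archive solution to guide its sampling.
            guide = archive[rng.choices(range(archive_size), weights=w)[0]][1]
            x = []
            for i in range(n_vars):
                # 4. Std. deviation = xi * mean distance to the other archive
                #    members, so sampling narrows as the archive converges.
                sigma = xi * sum(abs(s[1][i] - guide[i])
                                 for s in archive) / (archive_size - 1)
                x.append(min(max(rng.gauss(guide[i], sigma), lo), hi))
            new_solutions.append((cost_func(x), x))
        # 5. Merge the new solutions and keep only the best archive_size.
        archive = sorted(archive + new_solutions, key=lambda s: s[0])[:archive_size]

    best_cost, best_sol = archive[0]
    return best_cost, best_sol

# Example: minimize the 2-D Sphere function over [-5.12, 5.12]^2.
best_cost, best_sol = acor_minimize(lambda x: sum(v * v for v in x),
                                    n_vars=2, bounds=(-5.12, 5.12))
```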

For example, the Ackley problem (described in section 2) has 10 variables (d = 10), Xi, each with a continuous domain bound Xi ∈ [-32.768, 32.768]. The 3D plot of the Ackley function is depicted below (for readability, the plot is for dimension d = 2):

The algorithm input/outputs were:


The desired value for the 'Ackley' objective function is:
f(X) = 0.0 at X = (0.0, …,0.0)
The results demonstrate that the ACO algorithm has done a good job in the optimal minimum search. The convergence plot of the ACO computation is depicted below:

4. Code Implementation Details
The implementation of the ACO algorithm was coded in Python, and the GitHub repo for the code can be found here. This code is partly a Python port of the MATLAB implementation provided by [16] and an interpretation of a thesis on ACO swarm intelligence [17]. The core component used to invoke the ACO algorithm is the AcorContinuousDomain class. This class can be constructed by providing the following arguments to its constructor:
- Number of ant population size (__n_pop)
- Number of variables/dimensions of the continuous function being optimized (__n_vars)
- Cost function being optimized (__cost_func)
- Upper and lower domain bounds of the cost function being optimized (__domain_bounds)
as depicted in this code snippet:
To invoke ACO algorithm main loop and get the optimal solutions, use this code snippet:
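As the actual snippets live in the repo, here is an illustrative stand-in showing how the class might be constructed and invoked. The method name find_optimal_solution and its random-search body are assumptions for this sketch; the real ACO loop is in the repo.

```python
import random

# Illustrative stand-in for the repo's AcorContinuousDomain class.
class AcorContinuousDomain:
    def __init__(self, n_pop, n_vars, cost_func, domain_bounds):
        self.__n_pop = n_pop                  # ant population size
        self.__n_vars = n_vars                # problem dimensionality
        self.__cost_func = cost_func          # objective to minimize
        self.__domain_bounds = domain_bounds  # (lower, upper) bound pair

    def find_optimal_solution(self, max_iters=100, seed=0):
        """Placeholder main loop: random search standing in for ACO sampling."""
        rng = random.Random(seed)
        lo, hi = self.__domain_bounds
        best_x, best_cost = None, float("inf")
        for _ in range(max_iters * self.__n_pop):
            x = [rng.uniform(lo, hi) for _ in range(self.__n_vars)]
            cost = self.__cost_func(x)
            if cost < best_cost:
                best_cost, best_x = cost, x
        return best_x, best_cost

def sphere(x):
    return sum(v * v for v in x)

# Construct the solver, then run the main loop to get the optimal solution.
aco = AcorContinuousDomain(n_pop=10, n_vars=2,
                           cost_func=sphere,
                           domain_bounds=(-5.12, 5.12))
best_x, best_cost = aco.find_optimal_solution()
```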
Note that the ACO code utilizes helper classes ProblemConstants and AcoConstants to specify the problem domain use-cases and hyper-parameter configuration as depicted in the following code snippets:
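A hedged sketch of what these helper classes might look like follows; the field names are assumptions chosen to match the problem properties in section 2 and common ACOR hyper-parameters, not the repo's exact definitions:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch of the helper classes referenced in the text.
@dataclass
class ProblemConstants:
    name: str                             # e.g. "Ackley"
    n_vars: int                           # number of dimensions, d
    domain_bounds: Tuple[float, float]    # lower/upper variable bounds
    cost_func: Callable[[List[float]], float]

@dataclass
class AcoConstants:
    n_pop: int = 10          # number of ants sampled per iteration
    archive_size: int = 10   # size of the solution archive
    q: float = 0.5           # intensification factor (rank-weight spread)
    zeta: float = 0.85       # deviation-distance ratio
    max_iterations: int = 500

# Example wiring for one problem use-case.
problem = ProblemConstants(name="Sphere", n_vars=10,
                           domain_bounds=(-5.12, 5.12),
                           cost_func=lambda x: sum(v * v for v in x))
settings = AcoConstants(max_iterations=1000)
```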
The code path used to demo the ACO algorithm can be found in the Client.py module. This module incorporates the runAcoClient() function, which can be used to demo the ACO solution of a specific problem use-case as shown in the following snippet:
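An illustrative sketch of what the runAcoClient() entry point could look like; the body here (problem wiring plus a simple random-search stand-in for the solver call) is an assumption, not the repo's actual code:

```python
import random

# Hypothetical sketch of the Client.py entry point.
def runAcoClient():
    # 1. Pick the problem use-case: the Sphere function in d = 2.
    n_vars, bounds = 2, (-5.12, 5.12)
    cost_func = lambda x: sum(v * v for v in x)
    # 2. Run the solver (a random-search stand-in for the ACO main loop).
    rng = random.Random(1)
    best = min(([rng.uniform(*bounds) for _ in range(n_vars)]
                for _ in range(2000)),
               key=cost_func)
    # 3. Report the best solution found.
    print("best position:", best, "best cost:", cost_func(best))
    return best, cost_func(best)

if __name__ == "__main__":
    runAcoClient()
```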
5. Benchmark Test Results
In addition to the results obtained for the Ackley continuous-domain function (described in section 3), this section outlines the test results obtained for the four other continuous-domain functions (described in section 2), namely:
- Rosenbrock
- Sphere
- Styblinski-Tang
- Rastrigin
Note that these functions were solved using the ACO algorithm with the number of dimensions/variables set to 2. The 3D plots for these functions are depicted here:




The ACO algorithm inputs used and results outputs for each problem are outlined here:




6. Concluding remarks
This article introduced the nature-inspired metaheuristic technique of Ant Colony Optimization (ACO), used here for the minimization of multivariate continuous functions.
It provided a step-by-step demonstration of how to implement the ACO algorithm in Python. It also demonstrated the impressive convergence properties of the ACO algorithm on a set of classical benchmark continuous-domain optimization problems (when the appropriate ACO hyperparameters are configured).
In my future articles, I will be exploring the use of other nature inspired metaheuristic algorithms for the optimization of continuous domain problems.
7. References
[1] Mathematical optimization (2021), Wikipedia
[2] Hyperparameter Optimization (2021), Wikipedia
[3] Feature Selection (2021), Wikipedia
[4] H. Martin, et al., Metaheuristic Algorithms for Convolution Neural Network (2016), Computational Intelligence and Neuroscience
[5] Z. Ahmed, Train Neural Network (Numpy) – Particle Swarm Optimization (PSO) (2020), Medium
[6] B. A. Garro and R. A. Vazquez, Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms (2015), Computational Intelligence and Neuroscience
[7] V. Chandra, Site Selection and Best Path using Optimization Techniques – Canada Post Example (2020), Medium
[8] Deterministic global optimization (2020), Wikipedia
[9] Metaheuristic (2021), Wikipedia
[10] F. Aljarah et al., EvoloPy: An Open-source Nature-inspired Optimization Framework in Python (2016), In Proceedings of the 8th International Joint Conference on Computational Intelligence (IJCCI 2016) – Volume 1
[11] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets – Rosenbrock function (2013), Simon Fraser University
[12] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets – Ackley function (2013), Simon Fraser University
[13] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets – Sphere function (2013), Simon Fraser University
[14] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets – Styblinski-Tang function (2013), Simon Fraser University
[15] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets – Rastrigin function (2013), Simon Fraser University
[16] M. K. Heris, ACO for Continuous Domains in MATLAB (2015), Yarpiz
[17] I. C. J. Riadi, Cognitive Ant Colony Optimization: A New Framework in Swarm Intelligence (2014), PhD thesis, University of Salford, Manchester, UK
GitHub
The complete code for the ACO implementation (including unit tests) can be found here.
Thank you for reading this article!
You can connect with me at GitHub