FLW-HD: Optimisation based on Follow-the-Leader and random Walk in High Dimensions

Model by Sergio Rojas-Galeano, written in NetLogo 6.2.2. Tagged: swarm algorithm.


WHAT IS IT?

This is an agent-based model designed to search for the global minimum (optimal solution) of a continuous-valued optimization problem (cost function) defined in a high-dimensional search space, i.e., one with many variables or dimensions (d > 2).

Given a real-valued mathematical function, optimization refers to the task of finding a set of input values (a solution) that minimizes the corresponding output value of that function. Many algorithms have been designed to search for the minimal solution within an input domain defined by a set of bound constraints (also known as a search space). The number of input variables determines the dimensionality of the search space. Algorithms that use only evaluations of the function to guide the search in that space, without relying on any of its other mathematical properties (such as gradients), are called metaheuristics.

This model implements a metaheuristic to optimize real-valued problems in continuous domains, written in the NetLogo language. The optimization algorithm operates on the basis of two search operators, Follow-the-Leader and Random Walk, and is capable of solving problems in High-Dimensional spaces (d > 2), hence its name, FLW-HD. For the technical details underlying the algorithm, the reader is referred to the paper [1].


HOW IT WORKS

The algorithm works as a multi-agent system, where each agent explores a location in the search space, consisting of a vector of coordinates in each dimension (x1, x2, ..., x_d), together with the value of the cost function evaluated at that location. Since the goal of the algorithm is to minimize the cost function, the lower the value of an agent, the higher its quality as a candidate solution for the chosen problem. Furthermore, each agent can belong to one of three subsets: LEADERs (agents that guide the search), FOLLOWERs (agents that follow their leaders), or WALKERs (independent scouting agents).

In each iteration of the algorithm, the agents explore the search space according to their roles, starting from random initial locations. The algorithm is assigned a budget of evaluations of the cost function (MAX-EVALS); an evaluation is computed (and therefore deducted from the budget) each time an agent changes its location. The execution ends when MAX-EVALS is reached, or earlier if the optimum of the problem is discovered.
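The budget accounting described above can be illustrated with a minimal sketch (Python is used here purely for illustration; `cost` and the Gaussian proposal are hypothetical stand-ins for the FLW search operators, not the model's actual code):

```python
import random

def run(cost, lower, upper, d, max_evals, optimum_value, tol=1e-8):
    """Minimal budget-accounting loop: every relocation costs one evaluation."""
    x = [random.uniform(lower, upper) for _ in range(d)]  # random initial location
    best = cost(x)
    evals = 1                            # the initial placement is also evaluated
    while evals < max_evals:
        # propose a new location (stand-in for the FLW search operators)
        y = [min(upper, max(lower, xi + random.gauss(0, 0.1))) for xi in x]
        fy = cost(y)
        evals += 1                       # deduct one evaluation from the budget
        if fy < best:
            x, best = y, fy
        if abs(best - optimum_value) < tol:   # optimum reached: stop early
            break
    return best, evals

sphere = lambda x: sum(xi * xi for xi in x)
best, used = run(sphere, -5.12, 5.12, 2, 10000, 0.0)
```

The same accounting applies to every FLW operator: any move that requires evaluating the cost function is charged against MAX-EVALS.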

The FLW-HD algorithm search strategy consists of a combination of the following operators, which can be enabled/disabled at any time during the algorithm execution:

  • Follow-the-Leader: LEADERs are high-quality candidate solutions, and each LEADER has an (equally sized) pool of FOLLOWERs. FOLLOWERs try to resemble their LEADERs by jumping within close range of their locations with random step lengths (local exploitation). When a FOLLOWER discovers a more promising region of the search space (a location with a lower cost function value), it becomes the LEADER of its group (in practice, since all agents are homogeneous, they simply swap places). The algorithm supports single-leader and multi-leader modes of operation. The TOP-LEADER is the best among all the leaders in a particular iteration. The final solution reported by the algorithm is the BEST-EVER location (the one with the lowest cost function value) discovered by any TOP-LEADER agent during the entire run.

  • Random Walk: WALKERs move randomly around their current positions without any guidance (global exploration). When a WALKER accidentally becomes a more promising candidate solution than the current LEADERs, the TOP-LEADER is moved to that position. The length of a WALKER movement is determined by a normally distributed random step with deviation SIGMA, which can be sampled uniformly at random or adjusted adaptively by the agents (depending on the strategy defined by the STEP-SAMPLING chooser).

  • Elitism: this component aims to intensify the search around the best candidate solution discovered so far. The model allows three strategies to define the elite (as determined by the ELITISM chooser): relocate one WALKER to the location of the BEST-EVER candidate solution, move all WALKERs to that location, or relocate all the WALKERs and all the LEADERs to that location.

  • Warm Restarts: to prevent premature convergence to local minima, when the average diversity within the groups of LEADERs/FOLLOWERs becomes too small, or when agents have aged a predefined number of iterations, the coordinates of the entire population are reset to random locations within the search space. The reset follows the strategy defined by the LOCATION-SAMPLING chooser: either UNIFORM, where each coordinate is independently sampled from a uniform random distribution within the bounds of the search space, or LATIN-HYPERCUBE, where the coordinates are sampled so that no two agents intersect in any dimension, resulting in a more evenly dispersed coverage of the search space. Note that the chosen strategy is also used to initialize the population at the beginning of the simulation.
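As a rough illustration of the first two operators, the per-dimension update rules can be sketched as follows (a Python sketch mirroring the model's FOLLOW-MOVE and WALK-MOVE procedures: the bound clipping and the random factor in [0, 3) come from the model code, while the function names here are ours):

```python
import random

B = 5.12  # homogeneous bound constraint used by several testbed problems: -B <= x_i <= B

def clip(v):
    """Enforce the bound constraint on a single coordinate."""
    return max(-B, min(B, v))

def follow_move(follower, leader):
    """Jump within close range of the leader with a random step per dimension."""
    return [clip(xi + random.uniform(0, 3) * (li - xi))
            for xi, li in zip(follower, leader)]

def walk_move(walker, sigma):
    """Perturb one randomly chosen coordinate with a normal random step."""
    x = list(walker)
    i = random.randrange(len(x))
    x[i] = clip(x[i] + random.gauss(0, sigma))
    return x
```

Note that a follow move touches every dimension, while a walk move perturbs a single randomly chosen coordinate; both always stay inside the bound constraints.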


HOW TO USE IT

First, select one of the suite of available testbed real-valued cost functions using the PROBLEM chooser (SPHERE, RASTRIGIN, ROSENBROCK, HIMMELBLAU, EGGHOLDER or variations; HIMMELBLAU and EGGHOLDER are defined for 2D search spaces only, whereas for the other functions you can set the dimensionality of the search space with the DIMENSIONS chooser; see the TESTBED PROBLEMS section below for further information).

Then choose the number of search agents with the POP-SIZE slider, the percentage of WALKERS in the population with the WALKERS-RATE slider, and the number of N-LEADERS within the population. Also, choose the function evaluations budget with the MAX-EVALS slider. For example, to solve the RASTRIGIN problem in a 30D search space with a population of 20 agents, including 4 leaders with 3 followers each, plus 4 walkers, set up the simulation parameters as follows:

  • PROBLEM: rastrigin

  • DIMENSIONS: 30

  • MAX-EVALS: 300000 (by default, this is set to DIMENSIONS x 10000)

  • POP-SIZE: 20

  • N-LEADERS: 4

  • WALKERS-RATE: 0.2 (20% of the population, i.e., 4 walkers)

  • ELITISM: any walker


  • LOCATION-SAMPLING: latin hypercube

  • STEP-SAMPLING: adaptive

Now you can run the algorithm using the buttons in the control panel:

  • SETUP: Creates the population, locates the agents randomly within the limits of the search space according to the chosen LOCATION-SAMPLING strategy, and initializes the algorithm global variables.

  • STEP: Executes an iteration (a single tick) of the main procedure of the algorithm.

  • GO: Repeatedly executes the main procedure of the algorithm until finding the optimum or until reaching MAX-EVALS.

  • RESET: Relocates the population of agents using the LOCATION-SAMPLING strategy and clears the SIGMA-EFFECTIVENESS histogram (SIGMA refers to the step length used in WALKERs movements, as described above).
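The LATIN-HYPERCUBE strategy used by SETUP and RESET can be sketched as follows (a Python sketch of the per-dimension slot-and-shuffle scheme from the model's INIT-LOCATIONS procedure; the function name is illustrative):

```python
import random

def latin_hypercube(n_agents, dims, bound):
    """One agent per non-overlapping slot in every dimension; independent
    per-dimension shuffles mirror the model's init-locations procedure."""
    width = 2 * bound / n_agents            # equal slot width per dimension
    coords = [[0.0] * dims for _ in range(n_agents)]
    for d in range(dims):
        # one random location inside each slot, then shuffled across agents
        slots = [width * (s + random.random()) for s in range(n_agents)]
        random.shuffle(slots)
        for a in range(n_agents):
            coords[a][d] = -bound + slots[a]
    return coords
```

Because every dimension is split into one slot per agent, no two agents share a slot in any dimension, which is exactly the "no intersection" property described above.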

Also, remember that the SPEED slider at the top of the window can be used to control how fast the display area is updated on each iteration (note that the update operation slows down the execution of the algorithm). Disable VIEW-UPDATES for maximum speed.

Once you start running the simulation, the output monitors allow you to inspect the algorithm's emergent behavior as the search evolves:

  • OPTIMUM: The value of the problem cost function evaluated at the optimal solution (only for validation purposes, so this information is not revealed to the algorithm).

  • BEST-SOLUTION-EVER: The value of the problem cost function evaluated at the best TOP-LEADER location ever found during execution.

  • CURRENT-TOP-LEADER: The value of the problem cost function at the TOP-LEADER coordinates in the current iteration.


  • VALUE-OF-LEADERS: Historical plot of the leaders' values throughout the simulation. A different colour is used for each leader.

  • VALUE-OF-BEST-SOLUTION-EVER: Historical plot of the value of the best solutions discovered throughout the simulation. Note that the range of the plot is clipped regularly to allow better visualization of changes in the scale of the discovered solutions.

  • DIVERSITY-PER-LEADER: Historical plot of the average diversity measure of the groups of agents within the population. A different color is used for each leader.

  • SIGMA-EFFECTIVENESS: Histogram that shows to what extent each step deviation of WALKERs movements has contributed to improving the value of the best solutions (each bar corresponds to one of the following deviations: 10, 1, 0.1, 0.01, 1E-3, 1E-4, 1E-5). Notice that the plot is cleared each time the RESET operation is carried out (either automatically or when activated with the corresponding button).

  • MEAN-DIVERSITY: The average diversity of all groups led by leaders.

  • RUNTIME: Total execution time of the simulation run (in seconds).

  • EVALUATIONS-COUNT: Counter of the overall evaluations of the cost function carried out by the algorithm up to the current iteration. Note that if the algorithm reaches the optimal value (and thus stops before MAX-EVALS), these last two monitors will append a "!!!" sign to the values in their text boxes.

  • VALUE OF LEADERS: A text box that shows the value of the cost function for the location of each leader.
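The adaptive STEP-SAMPLING strategy that feeds on this histogram amounts to roulette-wheel selection over the per-sigma improvement counts; a hedged Python sketch (the deviation ladder follows the list above; `pick_sigma` is our name, not the model's):

```python
import random

STEP_SIGMAS = [10 ** (1 - i) for i in range(7)]   # 10, 1, 0.1, ..., 1E-5

def pick_sigma(hist):
    """Roulette wheel over improvement counts: sigmas that produced larger
    improvements are sampled proportionally more often."""
    total = sum(hist)
    ball = random.uniform(0, total)   # roll the ball on the wheel
    cumul = 0.0
    for sigma, count in zip(STEP_SIGMAS, hist):
        cumul += count
        if count > 0 and ball <= cumul:
            return sigma
    return STEP_SIGMAS[-1]            # numerical-edge fallback
```

Under the uniform strategy every sigma is equally likely; under the adaptive strategy the histogram biases the wheel toward the step sizes that have actually improved the best solution.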


TESTBED PROBLEMS

The mathematical expressions of these cost functions are defined in the COMPUTE-VALUE procedure, found in the CODE tab. In addition, the bound constraints for each dimension in the input space of each function are defined in the SETUP-PROBLEM procedure, also in the CODE tab, as follows:

  • SPHERE, SHIFTED-SPHERE: [-5.12, 5.12].

  • RASTRIGIN, SHIFTED-RASTRIGIN, BIPOLAR-RASTRIGIN: [-5.12, 5.12].

  • ROSENBROCK: [-2.048, 2.048].

  • HIMMELBLAU: [-5.12, 5.12].

  • EGGHOLDER: [-512, 512].

The ground-truth optima of these problems are also defined in the SETUP-PROBLEM procedure, as follows (we note that this information is hidden from the algorithm, since it is only used to evaluate its performance):

  • SPHERE: (0, 0, ..., 0) (the optimum is at the origin, that is, zero in all DIMENSIONS).

  • SHIFTED-SPHERE: (-2.5, -2.5, ..., -2.5) (the optimum is shifted to the 3rd hyper-quadrant).

  • RASTRIGIN: (0, 0, ..., 0) (the optimum is at the origin, that is, zero in all DIMENSIONS).

  • SHIFTED-RASTRIGIN: (1.123, 1.123, ..., 1.123) (the optimum is shifted to the 1st hyper-quadrant).

  • BIPOLAR-RASTRIGIN: (1, -1, 1,..., -1) (the optimum alternates between 1 and -1 in each dimension)

  • ROSENBROCK: (1, 1, ..., 1) (the optimum lies at the all-ones corner of the unit hypercube).

  • HIMMELBLAU: Any of (3.0, 2.0), (-2.805118, 3.131312),
    (-3.779310, -3.283186), or (3.584428, -1.848126).

  • EGGHOLDER: (512.0, 404.2319).
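For reference, standard textbook definitions of several of these cost functions, sketched in Python (the model's authoritative expressions live in its COMPUTE-VALUE procedure; the shifted/bipolar variants and EGGHOLDER are omitted here):

```python
import math

def sphere(x):
    """Unimodal convex bowl; minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rastrigin(x):
    """Highly multimodal; minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    """Curved narrow valley; minimum 0 at (1, 1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def himmelblau(p):
    """Four global minima of value 0 (e.g. at (3.0, 2.0)); 2D only."""
    x, y = p
    return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2
```

These sketches can also serve as ground truth when validating a reimplementation of the model's COMPUTE-VALUE procedure.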


NETLOGO FEATURES

The usual representation of the display area of NetLogo simulations as a discrete 2D grid of cells (patches) where agents move around is not suitable for our purposes, since our model deals with continuous values in higher dimensions (d >= 2). Therefore, the display area of our model was designed as an informative visual depiction of the algorithm's dynamics.

Instead of showing movements, the display visualises a series of color heatmaps that represent the coordinate vectors and cost function values of some of the agents in the population. The OPTIMUM solution is represented in the top row of the display area, as a guide to compare how similar are the candidate solutions discovered by the algorithm. The next two rows correspond to the current BEST-EVER and TOP-LEADER solutions, while the remaining bottom rows show the solutions from the LEADERs set (each row is labeled with its corresponding name on top of it).

The color intensity of each patch in the heatmap is scaled to the range defined by the search space boundary constraints: black for the lower limit, white for the upper limit, and graduated shades of orange for mid-range values (for example, if the BEST is at the coordinates with the value 0, it will be displayed as pure orange). As a simple test to illustrate these conventions, we suggest performing the following set of actions and comparing the differences (make sure to set N-LEADERS > 2 and that the VIEW-UPDATES checkbox is on):

DIMENSIONS: 30 -> PROBLEM: rastrigin -> SETUP -> PROBLEM: bipolar-rastrigin -> SETUP.

Notice that the patch column at the right edge of the display area shows the value of the cost function (or quality) associated with the corresponding solution; these patches are scaled to a range between 100 (black) and the OPTIMUM value (white). Besides, notice that the display area is automatically resized whenever the DIMENSIONS chooser is changed.


THINGS TO TRY

The behavior of the algorithm is determined by the application of its four search components: RESETS?, FOLLOW?, WALK? and ELITISM. You can try to solve each problem in the testbed suite with different configurations of these components and see their effect in the patterns that emerge on the monitor plots and in the display area. You can repeat these experiments many times (press the buttons SETUP -> GO as many times as you like) and verify the success rate for each problem (i.e., in what percentage of the runs was the FLW algorithm able to discover the OPTIMUM?). Also check the efficiency of each mode/configuration, in terms of the number of evaluations needed for a successful run (EVALUATIONS-COUNT) or the time taken to converge (RUNTIME), and how an increase in dimensionality affects this performance.

Once you have experimented with each configuration on each problem, you can further study the differences between single-leader and multi-leader modes of operation. Is the multi-leader strategy better than the single-leader strategy? Which achieves a higher success rate? Is such behavior exhibited across the entire testbed suite? Do you get better results as the number of LEADERs increases? You can test the last hypothesis by trying different combinations of the pair (N-LEADERS, POP-SIZE): (1, 5), (2, 10), (4, 20), (8, 40). How does this effectiveness depend on the dimensionality of the problem? Note that an increase in the number of LEADERs or in POP-SIZE also consumes the MAX-EVALS budget faster, and therefore a compromise may be required for the algorithm to succeed.

Other interesting questions to address are: Can you identify which search operator configuration improves the optimization effectiveness/efficiency of the FLW algorithm? Is there a minimum follower base size for multi-leader mode or a population size for single-leader mode to work properly? What search operator or component has more impact on the behaviour of the algorithm?


THINGS TO NOTICE

Each PROBLEM can exhibit different mathematical properties: multimodality, convexity, separability, multiple global or local minima, plateaus, ridges, valleys, etc. The advantage of swarm optimization methods is their ability to explore multiple regions of the search space simultaneously, avoiding getting stuck in local minima due to such properties, unlike single-point metaheuristics. However, it is also known that these swarm-based methods can suffer from premature convergence, where the entire population soon collapses into a single suboptimal solution or small region.

The FLW algorithm attempts to circumvent this drawback by means of two techniques. First, it allows for multiple leaders, which means you can choose between having a single population of agents with a single leader, or having multiple leaders with different subgroups of followers. By choosing multi-leader mode (controlled by the N-LEADERS slider), leaders spread out in separate regions of the search space looking for better solutions, and their followers likewise. You can verify this emergent behavior by experimenting with different values for the N-LEADERS and POP-SIZE sliders and by inspecting the different curves in the VALUE-OF-LEADERS plot and the simulation display area.

The second technique prevents premature convergence by dispersing agents with a warm reset of their locations every time MEAN-DIVERSITY drops below a certain threshold, or periodically once agents have aged a predetermined number of iterations (you can also trigger a reset manually by pressing the RESET button at any time during the simulation).

Since relocated leaders can start exploring newer unknown regions, a learning mechanism allows the algorithm to keep track of the most promising solution found so far in previous resets (this is reminiscent of ELITISM in Genetic Algorithms). FLW-HD achieves this by forcing a subset of agents to move to the location of the BEST-SOLUTION-EVER; the choice of the subset ("the elite") is defined by the ELITISM chooser, among one of the following options: ANY-WALKER (an arbitrary agent from the WALKERs set moves to the best solution location), WALKERs (the entire set of WALKERs is moved to that location), EVERYBODY (the entire set of WALKERs and LEADERs, and thus FOLLOWERs, is relocated to that position), or simply NONE (no elitism is applied).

The ELITISM strategy can impact the emergent behavior of the algorithm in different ways, and such impact will depend on the properties of the problem, the dimensionality of the search space and the configuration of the search operators used during its execution. We encourage the user to experiment with these different strategies to see how the success of reaching the OPTIMUM is affected, as well as the efficiency of the algorithm, in terms of time taken to converge (RUNTIME) or number of function evaluations required (EVALUATIONS-COUNT).


EXTENDING THE MODEL

Some possible ideas for model extensions are:

  • Expand the suite of testbed problems with additional cost functions (including combinations of existing ones).

  • Extend the model to solve non-stationary problems, that is, problems where the cost function or its parameters can vary over time during the course of a run. In this sense, we believe that the mechanisms to avoid premature convergence implemented in the FLW-HD algorithm can be useful to adapt its search behavior to the dynamic changes in the optimization space.

  • Implement an adaptive strategy of combining/applying search operators to allow the algorithm to take further advantage of the discovered properties of the problem during its execution, so as to perform more efficiently (just like the adaptive strategy currently implemented to sample the deviation of the WALKERs steps), that is, converging to the OPTIMUM solution in fewer cost function evaluations.

  • Finally, the adaptation of the algorithm to handle non-continuous problem domains (where the variables in each dimension take binary, discrete or combinatorial values) is also an interesting path for future development.


RELATED MODELS

Modeling Commons -> Follow the Leader + Random Walk (FLW) Swarm Algorithm, see [1].




CREDITS AND REFERENCES

Sergio Rojas-Galeano

Copyright (c) December 2022

email: srojas@udistrital.edu.co

Version 1.12



[1] Garzon, M., Alvarez-Pomar and Rojas-Galeano, S. (2023). An Agent-based Model of Follow-the-leader Search using Multiple Leaders. To appear in: Proceedings of the 14th Metaheuristic International Conference (MIC 2022).


;; --------------------------------------------------------------------------
;; A multi-leader multi-agent model for bound-constrained numerical optimization
;; based on Follow-the-Leaders and random Walk search in High Dimensions (FLW-HD).
;; This agent-based model aims to find the global minimum solution
;; of a continuous-valued cost function for a given optimisation problem.
;; A model by Sergio Rojas-Galeano
;; v1.12, December 2022 Copyright (c) The author
;; email: srojas@udistrital.edu.co
;; Universidad Distrital Francisco Jose de Caldas, Bogota, Colombia
;; This program is free software: you can redistribute it and/or modify
;; it under the terms of the GNU General Public License (GPLv3)
;; (see license at: https://www.gnu.org/licenses/gpl-3.0.txt)
;; The model is made publicly available in the hope that it will be useful
;; to modelers, but WITHOUT ANY WARRANTY whatsoever (see license for details).
;; --------------------------------------------------------------------------

globals [
  ;; FLW algorithm globals
  top-leader          ; best agent in current iteration
  top-leader-value    ; cost function value of current top leader
  best-ever           ; list of coordinates of best solution ever found
  best-ever-value     ; cost function value of best solution ever found
  flw-diversity       ; total diversity of agent population
  flw-runtime         ; runtime (ms) until optimum or maximum ticks is reached (ms)
  flw-success         ; indicator of optimum value reached (to a certain tolerance)
  flw-evals           ; number of cost function evaluations so far

  ;; Problem globals
  true-best           ; ground truth optimum solution for a given problem
  true-best-value     ; cost function value of optimum solution
  xi-bounds           ; (homogeneous) bound constraint for all xi (-B <= xi <= B)
  eps                 ; error precision of solution approximation w.r.t optimum
  feps                ; error precision of solution function value w.r.t optimum
  step-sigmas         ; list of standard deviation choices for walker steps
  sigmas-hist         ; histogram effectiveness of walker step deviations
]

;; FLW breeds
breed [leaders leader]
breed [followers follower]
breed [walkers walker]

;; Agent variables
turtles-own [
  my-leader           ; leader that this agent is following
  xcors               ; d-dimensional list of coordinates (agent location or candidate solution)
  value               ; value of cost function evaluated at agent location
]

leaders-own [
  diversity           ; diversity of coordinates between leaders and their followers
]

walkers-own [
  step-sigma          ; std. deviation of random step width for walker move
]

;; Initialise settings for problem and swarm of agents

to setup
  clear-all
  setup-problem        ; define bound constraints and ground-truth optimum (see SETUP-PROBLEM below in this tab)
  let n-walkers pop-size * walkers-rate

  ;; create leader agents and their followers
  let colors shuffle [ 25 45 65 75 95 115 125 135 ]       ; colors to distinguish leaders in view area
  create-leaders n-leaders [
    set color first colors set colors remove color colors ; assign next color
    set my-leader self

    ;; create an equal number of followers/leader (NB. round-off may yield fewer agents than pop-size)
    hatch-followers (pop-size - n-walkers - n-leaders) / n-leaders
  ]

  ;; create walker agents
  create-walkers n-walkers

  ;; randomly locate all agents
  init-locations

  ;; initialise FLW globals
  set step-sigmas  n-values 6 [ i -> 10 ^ (1 - i)]     ; list of walker step deviations: [10 1...1E-5]
  set sigmas-hist n-values length step-sigmas [ .01 ]  ; histogram of step deviations effectiveness
  set flw-evals 0 set flw-success false
  set max-evals dimensions * 10000                     ; 10^4 per dim., as defined in CEC2013 competition
  setup-display
  setup-histogram
  reset-ticks
end

;; Assign initial locations (coordinates) to agents according to location-sampling chooser

to init-locations
  ;; either use random uniform locations
  if (location-sampling = "uniform") [
    ask turtles [ set xcors n-values dimensions [ (random-float 2 * xi-bounds) - xi-bounds ] ]
  ]

  ;; or else compute initial locations using Latin Hypercube Sampling (LHS)
  if (location-sampling = "latin hypercube") [
    ask turtles [ set xcors n-values dimensions [ 0 ] ]

    ;; recall that n-agents maybe fewer than pop-size due to rounding-off of followers/leader at setup
    let n-agents count turtles
    let width 2 * xi-bounds / n-agents   ; the width to split range of each dimension in equal slots

    ;; split each dimension in non-overlapping slots (1 per agent) and sample random locations within
    foreach range dimensions [ index ->
      let locations shuffle n-values n-agents [ slot -> width * (slot + random-float 1) ]

      ;; assign locations to xcor of each agent in orderly manner, ensuring bound constraint
      (foreach sort turtles locations [
        [agent location] -> ask agent [ set xcors replace-item index xcors (- xi-bounds + location) ]
      ])
    ]
  ]

  ;; now compute cost function value of each agent at corresponding initial location
  ask turtles [
    set value compute-value xcors
    ht ; turtle visualisation not needed in this model
  ]
end

;; Execute one iteration of FLW algorithm

to go

  ;; apply search operators
  if elitism != "none"  [ do-elitism ]
  if follow?   [ follow-leaders ]
  if walk?     [ random-walks ]
  if resets? [ warm-reset ]

  ;; update FLW globals
  check-leadership
  update-best
  check-diversity
  update-runtime
  tick
  update-display   ; visualisation is not accounted for runtime stats

  ;; termination condition: max ticks, or optimum reached (s.t. precision tolerance)
  if (flw-evals > max-evals) or (flw-success)  [ stop ]
end

;; Move the elite towards best solution ever

to do-elitism
  let elite (ifelse-value
    (elitism = "any walker") [ walker min [who] of walkers ]
    (elitism = "walkers")    [ walkers ]
    (elitism = "everybody")  [ (turtle-set walkers leaders) ]
    [ no-turtles ])
  ask elite [
    set xcors best-ever
    set value best-ever-value
  ]
end

;; Exploitation operator: follow-the-leaders

to follow-leaders
  ;; move follower agents
  ask followers [
    follow-move
    set value compute-value xcors
  ]
end

;; Move a follower towards its leader

to follow-move
  foreach range dimensions [
    index -> let xi item index xcors

    ;; apply follow rule in each dimension
    let delta-xi (item index [xcors] of my-leader) - xi
    let xi-new (xi + (random-float 3) * delta-xi)

    ;; update current coordinate value enforcing bound constraint
    set xcors replace-item index xcors max (list min (list xi-new xi-bounds) (- xi-bounds))
  ]
end

;; Exploration operator: random walks

to random-walks
  ;; move walker agents
  ask walkers [
    walk-move
    set value compute-value xcors

    ;; if walker is better, drag the top-leader here
    if value < top-leader-value [
      ask top-leader [ set xcors [xcors] of myself set value [value] of myself ]
    ]

    ;; if new best ever solution discovered, update histogram of step deviation effectiveness
    if value < best-ever-value [
      let index (position step-sigma step-sigmas)
      let new-count (item index sigmas-hist + max list (best-ever-value - value) .01 )
;      let new-count (item index sigmas-hist + .01 )
      set sigmas-hist replace-item index sigmas-hist new-count
    ]
  ]
end

;; Walker perturbation

to walk-move
  ;; choose a random coordinate to be modified
  let index random dimensions
  let xi item index xcors

  ;; choose a random step width according to step-sampling strategy
  let step get-step

  ;; update chosen coordinate value enforcing bound constraint
  set xcors replace-item index xcors max (list min (list (xi + step) xi-bounds) (- xi-bounds))
end

to-report get-step
  ;; either choose any step deviation (sigma) uniformly
  if (step-sampling = "uniform") [
    set step-sigma one-of step-sigmas
  ]

  ;; or else choose a step deviation proportional to the effectiveness histogram
  if (step-sampling = "adaptive") [
    ;; compute cumulative distribution from histogram of sigma improvement counts
    ;; (notice first value is formatted as sublist as it is required as such by subsequent reduce operator)
    let hist fput (list (first sigmas-hist)) (but-first sigmas-hist)
    let aggs reduce [ [cumul next] -> sentence cumul ((last cumul) + next) ] hist

    ;; use roulette wheel to choose sigma value according to effectiveness histogram
    let pockets map [ p -> p / last aggs] aggs  ; compute wheel pockets by normalising cumulative sum
    let ball random-float 1                     ; roll the ball, then check the winner pocket
    let winner first filter [ [index] -> ball <= item index pockets ] range (length pockets)
    set step-sigma item winner step-sigmas      ; remember the chosen step deviation for this walker
  ]
  report random-normal 0 step-sigma             ; sample a normal random step with chosen sigma deviation
end

;; Compute diversity of each group with respect to their leaders and average diversity of groups

to check-diversity
  ask leaders [
    set diversity 0
    foreach sort followers with [ my-leader = myself ] [
      fi -> set diversity diversity + l2-distance xcors [xcors] of fi
    ]
  ]
  set flw-diversity mean [diversity] of leaders
end

;; Periodically reset agents locations randomly to prevent premature convergence

to warm-reset
  if flw-evals mod (max-evals / 10) < pop-size or flw-diversity < 1E-8 [ reset ]
end

;; Restart agents locations and cleanup histogram of deviation effectiveness

to reset
  init-locations                                        ; reset agent location coordinates
  set sigmas-hist n-values length step-sigmas [ .01 ]   ; assign initial counts <> 0
  setup-histogram                                       ; reset histogram of step deviations
end

;; Allow best followers to claim leadership

to check-leadership
  ask leaders [
    let my-best-follower min-one-of turtles with [ my-leader = myself ] [value]

    ;; swap places with best follower if better than leader
    if [value] of my-best-follower < value [
      let my-old-xcors xcors
      let my-old-value value
      set xcors [xcors] of my-best-follower set value [value] of my-best-follower
      ask my-best-follower [ set xcors my-old-xcors set value my-old-value ]
    ]
  ]
end

;; Update best solution discovered so far

to update-best
  ask min-one-of leaders [value] [
    set top-leader self
    set top-leader-value value

    ;; best solution ever must be a better top-leader
    if top-leader-value < best-ever-value [
      set best-ever [xcors] of top-leader
      set best-ever-value top-leader-value

      ;; check if optimum found (termination condition)
      set eps ifelse-value problem = "himmelblau" [
        ;; for himmelblau, check if best-ever is close enough to any of multiple minima
        min map [ i -> l2-distance best-ever (list item i true-best item (i + 1) true-best) ] (range 0 length true-best 2)
      ] [
        ;; for the other problems there's a single minimum
        l2-distance best-ever true-best
      ]
      set feps abs (true-best-value - best-ever-value)
      if feps < 1E-8 [ set flw-success true ]   ; 1E-8 tolerance value, as defined in CEC2013 competition
    ]
  ]
end

;; Update actual cumulative algorithm runtime

to update-runtime
  set flw-runtime flw-runtime + timer
end

;; Prepare the display area to show some agents coordinates and values

to setup-display
  resize-world 0 (dimensions + 1) 0 2 * (3 + n-leaders)
  set-patch-size ( ifelse-value dimensions <= 2 [ 34 ] dimensions <= 20 [ 18 ] dimensions <= 50 [ 14 ] [ 8 ] )
end

;; Update the display area showing heatmaps of some agents coordinates and values

to update-display
  let nrows 2 * (3 + n-leaders)

  ;; Build lists to display coordinates, values and labels of leaders
  let lxcors [] let lvalues [] let llabels []
  foreach sort-on [(- who)] leaders [ li ->
    set lxcors insert-item 0 lxcors [xcors] of li
    set lvalues insert-item 0 lvalues [value] of li
    set llabels insert-item 0 llabels word "Leader " ([who] of li + 1)
  ]

  ;; Extend lists to display optimum, best solution ever and top leader's coordinates, labels and values
  foreach (list [xcors] of top-leader best-ever true-best) [ li -> set lxcors insert-item 0 lxcors li ]
  foreach (list [value] of top-leader best-ever-value true-best-value) [ li -> set lvalues insert-item 0 lvalues li ]
  foreach (list  word "Top leader:#" ([who] of top-leader + 1) "Best ever" "Optimum") [ li -> set llabels insert-item 0 llabels li ]

  ;; Traverse lists and display data on display area
  foreach (range 0 (length lxcors)) [ row ->
    ask patch (dimensions / 2) (nrows - 2 * row) [ set plabel item row llabels ]
    ask patch (dimensions + 1) (nrows - 2 * row) [ set plabel "f(x)" ]
    ask patches with [ pycor = nrows - (2 * row + 1) and pxcor < dimensions] [
      set pcolor scale-color orange (item pxcor (item row lxcors))  (- xi-bounds) xi-bounds
    ask patches with [ pycor = nrows - (2 * row + 1) and pxcor = dimensions + 1] [
      set pcolor scale-color orange (item row lvalues) 100 true-best-value

;; Setup pens for bars in histogram of step deviations effectiveness

to setup-histogram
  set-current-plot "sigma effectiveness"
  foreach range length sigmas-hist [ i ->
    create-temporary-plot-pen word "pen" i
    set-plot-pen-mode 1            ; show as bars
    set-plot-pen-color i * 10 + 4  ; assign bar colors
  ]
  set-plot-x-range 0 length sigmas-hist
end

;; Report a list comprising the leaders' labels and values

to-report get-leaders-values
  report (ifelse-value
    n-leaders = 1 [ (list "L1:" [precision value 4] of leader 0) ]
    n-leaders = 2 [ (list "L1:" [precision value 4] of leader 0 "|| L2:" [precision value 4] of leader 1) ]
    n-leaders = 3 [ (list "L1:" [precision value 4] of leader 0 "|| L2:" [precision value 4] of leader 1
                     "|| L3:" [precision value 4] of leader 2) ]
    n-leaders = 4 [ (list "L1:" [precision value 4] of leader 0 "|| L2:" [precision value 4] of leader 1
                     "|| L3:" [precision value 4] of leader 2 "|| L4:" [precision value 4] of leader 3) ]
    [ sort [precision value 4] of leaders ])
end

;; Ancillary reporters to compute vector distances

to-report l1-distance [xvec zvec]
  let dist sum (map [ [xi zi] -> abs (xi - zi) ] xvec zvec)
  report dist
end

to-report l2-distance [xvec zvec]
  let dist sqrt sum (map [ [xi zi] -> (xi - zi) ^ 2 ] xvec zvec)
  report dist
end
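These two reporters are the Manhattan and Euclidean distances. A quick Python equivalent, useful for checking results outside NetLogo (function names are mine):

```python
def l1_distance(xvec, zvec):
    # sum of absolute coordinate differences (Manhattan distance)
    return sum(abs(xi - zi) for xi, zi in zip(xvec, zvec))

def l2_distance(xvec, zvec):
    # square root of the summed squared differences (Euclidean distance)
    return sum((xi - zi) ** 2 for xi, zi in zip(xvec, zvec)) ** 0.5

print(l1_distance([0, 0], [3, 4]))  # 7
print(l2_distance([0, 0], [3, 4]))  # 5.0
```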

;;;;;------------ Benchmark problems section ------------;;;;;
;; Define ground-truth optimum coordinates and its cost function value

to setup-problem
  set best-ever-value 1E20    ; ensure any leader becomes best-ever in the first iteration
  set xi-bounds 5.12          ; default bound constraint: -xi-bounds <= xi <= xi-bounds

  (ifelse
    ;; convex unimodal paraboloid cost function
    problem = "sphere" [ set true-best n-values dimensions [ 0 ] ]

    ;; convex unimodal shifted paraboloid cost function
    problem = "shifted-sphere" [ set true-best n-values dimensions [ -2.5 ] ]

    ;; multimodal cost function with multiple local minima and ridges
    problem = "rastrigin" [ set true-best n-values dimensions [ 0 ] ]

    ;; multimodal, shifted cost function with multiple local minima and ridges
    problem = "shifted-rastrigin" [ set true-best n-values dimensions [ 1.123 ] ]

    ;; multimodal, shifted cost function with multiple local minima, ridges and a checkered-pattern optimum
    problem = "bipolar-rastrigin" [ set true-best map [ i -> 2 * (i mod 2) - 1 ] range dimensions ]

    ;; non-convex cost function with a single global minimum inside a long valley
    problem = "rosenbrock" [ set true-best n-values dimensions [ 1 ]  set xi-bounds 2.048 ]

    ;; cost function with multiple global minima inside valleys
    problem = "himmelblau" [ set true-best [ 3.0 2.0 -2.805118 3.131312 -3.779310 -3.283186 3.584428 -1.848126 ]  set dimensions 2 ]

    ;; multimodal cost function with multiple local minima and ridges
    problem = "eggholder" [ set true-best [ 512.0 404.2319 ]  set xi-bounds 512  set dimensions 2 ]
  )

  set true-best-value compute-value true-best    ; compute the optimum value
end

;; Evaluation of cost function of benchmark problems at given xinput coordinates

to-report compute-value [ xinput ]
  let fvalue 0

  ;; note: NetLogo's cos and sin take degrees, hence the (180 / pi) conversion factors below
  (ifelse
    problem = "sphere" [
      set fvalue sum (map [ xi -> xi ^ 2 ] xinput)
    ]
    problem = "shifted-sphere" [
      set fvalue sum (map [ xi -> (xi + 2.5) ^ 2 ] xinput)
    ]
    problem = "rastrigin" [
      set fvalue 10 * dimensions + sum (map [ xi -> (xi ^ 2) - 10 * cos ( (180 / pi) * (2 * pi) * xi ) ] xinput)
    ]
    problem = "shifted-rastrigin" [
      set fvalue 10 * dimensions + sum (map [ xi -> ((xi - 1.123) ^ 2) - 10 * cos ( (180 / pi) * (2 * pi) * (xi - 1.123) ) ] xinput)
    ]
    problem = "bipolar-rastrigin" [
      set fvalue 10 * dimensions + sum (map [ [xi i] -> ((xi - (2 * (i mod 2) - 1)) ^ 2) - 10 * cos ( (180 / pi) * (2 * pi) * (xi - (2 * (i mod 2) - 1)) ) ] xinput range dimensions)
    ]
    problem = "rosenbrock" [
      set fvalue sum (map [ i -> 100 * ( (item i xinput) ^ 2 - (item (i + 1) xinput) ) ^ 2 + (1 - (item i xinput)) ^ 2 ] (range 0 dimensions 2))
    ]
    problem = "himmelblau" [
      let x1 item 0 xinput
      let x2 item 1 xinput
      set fvalue ((x1 ^ 2) + x2 - 11) ^ 2 + (x1 + (x2 ^ 2) - 7) ^ 2
    ]
    problem = "eggholder" [
      let x1 item 0 xinput
      let x2 item 1 xinput
      set fvalue ( (- x1) * sin ( (180 / pi) * sqrt (abs (x1 - (x2 + 47))) ) ) - (x2 + 47) * sin ( (180 / pi) * sqrt (abs ((x1 / 2) + (x2 + 47))) )
    ]
  )

  set flw-evals flw-evals + 1    ; update the count of cost-function evaluations
  report fvalue
end

;;;;;; END OF MODEL ;;;;;;
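As a sanity check on the benchmark definitions, here is a hedged Python re-implementation of a few of the cost functions (outside the NetLogo model; the function names are mine). NetLogo's `cos` and `sin` take degrees, which is why the model multiplies by `(180 / pi)`; Python's `math.cos` and `math.sin` work in radians, so those factors disappear here:

```python
import math

def sphere(x):
    return sum(xi ** 2 for xi in x)

def rastrigin(x):
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def himmelblau(x):
    x1, x2 = x
    return (x1 ** 2 + x2 - 11) ** 2 + (x1 + x2 ** 2 - 7) ** 2

def eggholder(x):
    x1, x2 = x
    return (-x1 * math.sin(math.sqrt(abs(x1 - (x2 + 47))))
            - (x2 + 47) * math.sin(math.sqrt(abs(x1 / 2 + x2 + 47))))

# each reported optimum should evaluate to its known minimum value
print(sphere([0.0] * 10))            # 0.0
print(rastrigin([0.0] * 10))         # 0.0
print(himmelblau([3.0, 2.0]))        # 0.0
print(eggholder([512.0, 404.2319]))  # close to -959.6407, the known optimum
```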

