Parrondo on a network
WHAT IS IT?
This model implements a version of Parrondo's scheme on a network, based on several variants proposed in the literature. The code focuses on providing implementations of the elementary games played against the environment. In all cases, only a random policy for choosing between game A and game B is provided.
HOW IT WORKS
The model is based on two elementary games played by agents located on a 2D lattice. At each step, each agent plays one of two games - game A or game B. The first one represents a game played against one of the neighbours; the second one is played against the environment. The game played at each step is selected at random.
Additionally, each agent holds information about its current state and a memory of its previous state. Each agent also has an assigned wealth, describing the accumulated capital or some other type of possession. Wealth, as well as state, can be used to select the game played at each step, and both are used in the games implemented in the model. In some cases, the game is selected using information about the states of the agents in the neighbourhood.
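The per-round logic described above can be sketched in Python (the model itself is written in NetLogo; the class and function names below are illustrative, not the model's identifiers):

```python
import random

class Agent:
    """Minimal sketch of an agent: current state, previous state, wealth."""
    def __init__(self, initial_wealth=100):
        self.state = random.randint(0, 1)    # 1 - won the last round, 0 - lost
        self.state_1 = random.randint(0, 1)  # state one round earlier
        self.wealth = initial_wealth

def zero_sum_game(agent, neighbour):
    """Game A: a fair coin flip that transfers one unit of wealth
    between the agent and one of its neighbours."""
    if random.random() < 0.5:
        winner, loser = agent, neighbour
    else:
        winner, loser = neighbour, agent
    winner.state, winner.wealth = 1, winner.wealth + 1
    loser.state, loser.wealth = 0, loser.wealth - 1

def step(agent, neighbour, game_a_prob, game_b):
    """One round: play game A with probability game_a_prob, else game B."""
    if random.random() < game_a_prob:
        zero_sum_game(agent, neighbour)
    else:
        game_b(agent)
```

Note that game A is zero-sum: the total wealth of the pair is conserved, only redistributed.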
HOW TO USE IT
The model includes several parameters for controlling the behaviours of agents, selecting the type of game, and controlling the characteristics of games played by agents.
Agents possess information about their wealth, their current state, and their state in the previous round. The state describes the outcome - win or lose - of the round. Additionally, agents have the ability to exchange position with one of their neighbours.
The model includes parameters controlling the behaviour of agents which are independent of the chosen scenario (e.g. the probability of exchanging positions) and parameters depending on the type of the chosen scenario (e.g. the initial wealth).
Note that in some games not all properties are used. However, each elementary game should update the state and the wealth of the agent.
The parameters independent of the chosen scenario include
- slider swap-prob - controlling the probability of the agent swapping its position with one of the agents in its Moore neighbourhood (agents located at a Chebyshev distance of 1)
- slider gameA-prob - probability of playing the zero-sum game (i.e. game A)
The parameters independent of the chosen game B are
- aWinProb slider - probability of winning in the zero-sum game (i.e. game A)
- initialWealth input - value of the initial wealth assigned to all agents. The wealth should be updated by any elementary game, even if it is not used to control the game.
The type of game B is selected with the gameB-type chooser. Currently, three types of game B are implemented, namely,
- capital-based - original scheme with game B played using the accumulated wealth,
- state-based - game B depends on the previous state (i.e. history),
- niche-based - game B depends on the state of the agents located in the neighborhood.
For the implemented variants of game B, the following controls are available.
For the capital-based game
In this case, one can control
- bigM - parameter M used to introduce a dependency on the accumulated capital via the divisibility condition
- pWinBranch1 and pWinBranch2 - probabilities of winning used in branch 1 and branch 2 of game B
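For the capital-based variant with bigM = 3, the long-run behaviour of a single agent can be computed exactly, since the capital modulo 3 forms a three-state Markov chain. The sketch below uses the classic parameter values from the Parrondo literature (pWinBranch1 = 1/10, pWinBranch2 = 3/4, and game A treated as a coin flip with probability 1/2, all biased by a small epsilon) - these are assumptions for illustration, not necessarily the model's defaults. It shows the paradox: game A alone and game B alone both lose, while a random mixture of the two wins.

```python
def stationary_drift(p):
    """p = [p0, p1, p2]: winning probability when capital mod 3 is 0, 1, 2.
    A win moves capital s -> s+1 (mod 3), a loss s -> s-1 (mod 3).
    Solves pi = pi P for the three-state chain by elimination and returns
    the expected capital change per round, sum_s pi_s * (2 p_s - 1)."""
    p0, p1, p2 = p
    # Balance equations (incoming probability mass), with pi0 set to 1:
    #   pi1 = p0 * pi0 + (1 - p2) * pi2
    #   pi2 = (1 - p0) * pi0 + p1 * pi1
    pi2 = ((1 - p0) + p0 * p1) / (1 - p1 * (1 - p2))
    pi1 = p0 + (1 - p2) * pi2
    total = 1.0 + pi1 + pi2
    pis = [1.0 / total, pi1 / total, pi2 / total]
    return sum(pi * (2 * w - 1) for pi, w in zip(pis, p))

eps = 0.005
game_a = [0.5 - eps] * 3                      # game A: plain biased coin flip
game_b = [0.1 - eps, 0.75 - eps, 0.75 - eps]  # game B: branch by capital mod 3
mixed = [(pa + pb) / 2 for pa, pb in zip(game_a, game_b)]
```

With these values, `stationary_drift(game_a)` and `stationary_drift(game_b)` are both negative (roughly -0.01 per round each), while `stationary_drift(mixed)` is positive - the paradoxical gain that the model lets one observe on a lattice.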
For the state-based game
In the case of the state-based (or history-based) game, it is possible to change the four probabilities assigned to the four combinations of the current and the previous state. They are controlled via sliders pWinState00, pWinState01, pWinState10, and pWinState11, corresponding to the (current, previous) state combinations (lose, lose), (lose, win), (win, lose), and (win, win).
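A sketch of the state-based branch selection in Python (illustrative names; note that the branch index must be computed from the pair (current, previous) before the one-round history is shifted):

```python
import random
from types import SimpleNamespace

def state_based_game(agent, p_win):
    """Game B, state-based variant (a sketch): p_win holds four winning
    probabilities indexed by index = 2 * current_state + previous_state,
    i.e. (lose, lose) -> 0, (lose, win) -> 1, (win, lose) -> 2, (win, win) -> 3."""
    branch = 2 * agent.state + agent.state_1  # select before shifting history
    agent.state_1 = agent.state               # save current state as history
    if random.random() < p_win[branch]:
        agent.state, agent.wealth = 1, agent.wealth + 1
    else:
        agent.state, agent.wealth = 0, agent.wealth - 1
```

For example, an agent that lost the current round but won the previous one plays branch 1 and therefore wins or loses one unit of wealth with probability `p_win[1]`.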
For the niche-based game
Sliders pWinNiche0, pWinNiche1, pWinNiche2, pWinNiche3, and pWinNiche4 control the probability of the agent winning if, among the agents in its von Neumann neighborhood, the number of agents in the winning state is 0, 1, 2, 3, or 4, respectively. Hence, the subsequent probabilities reflect the chance of winning based on the tendency to win in the proximity of the agent.
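The niche-based rule can be sketched in Python as follows (illustrative names; the winning probability is looked up by the count of winning neighbours):

```python
import random
from types import SimpleNamespace

def niche_based_game(agent, neighbours, p_win_niche):
    """Game B, niche-based variant (a sketch): p_win_niche has five entries,
    indexed by the number of von Neumann neighbours (0..4) currently in the
    winning state."""
    winners = sum(n.state for n in neighbours)  # neighbours in state 1
    agent.state_1 = agent.state                 # shift the one-round history
    if random.random() < p_win_niche[winners]:
        agent.state, agent.wealth = 1, agent.wealth + 1
    else:
        agent.state, agent.wealth = 0, agent.wealth - 1
```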
THINGS TO NOTICE
The model provides a monitor for the average increase in wealth, as well as a monitor for one of the agents (the agent with id 0). Thus, one can observe the evolution of the average wealth of all agents, as well as the evolution of the wealth of a single agent.
EXTENDING THE MODEL
The model can be extended by including a new type of game B. For this purpose, it is necessary to add a new possible value of gameB-type, include a new branch in the main switch in the go procedure, and define a procedure implementing the rules of the new game.
Currently, only a random policy for choosing between game A and game B is implemented. The resulting paradoxical behaviour can also be observed using a deterministic policy.
CREDITS AND REFERENCES
Gregory P. Harmer, Derek Abbott, Peter G. Taylor, and Juan M. R. Parrondo, Brownian ratchets and Parrondo’s games, Chaos: An Interdisciplinary Journal of Nonlinear Science 11, 705 (2001); doi: 10.1063/1.1395623
R. Toral, Cooperative Parrondo's games, Fluct. Noise Lett. Vol. 1 (2001) 7-12; doi: 10.1142/S021947750100007X
Z. Mihailović, M. Rajković, Cooperative Parrondo's games on a two-dimensional lattice, Physica A: Statistical Mechanics and its Applications, Volume 365, Issue 1, pp. 244-251 (2006); doi: 10.1016/j.physa.2006.01.032.
Y. Ye, N.-G. Xie, L.-G. Wang, R. Meng, and Y.-W. Cen, Study of biotic evolutionary mechanisms based on the multi-agent Parrondo's games, Fluctuation and Noise Letters, Vol. 11, No. 02, 1250012 (2012); doi: 10.1142/S0219477512500125
;;
;; internal variables for agents
;;
turtles-own [
  state    ;; current state, 1 - winning, 0 - losing
  state-1  ;; state in the previous round, 1 - winning, 0 - losing
  wealth   ;; total wealth, set to some initial value during the setup
]

;;
;; setup the world: create agents, assign states
;;
to setup
  clear-all
  ask patches [
    set pcolor white
    sprout 1 [
      set wealth intial-wealth
      set shape "person"
      ifelse random-float 1 < 0.5 [ set state 1 ][ set state 0 ]
      ifelse random-float 1 < 0.5 [ set state-1 1 ][ set state-1 0 ]
    ]
  ]
  reset-ticks
end

;;
;; main process
;;
to go
  ask one-of turtles [
    ;; position swapping process, performed with probability swap-prob
    if random-float 1.0 < swap-prob [
      swap-positions
    ]
    ;; include more colors and id
    show-debug-info
    ;;
    ;; main switch based on the type of selected game
    ;;
    ifelse random-float 1.0 < gameA-prob [
      zero-sum-game ;; game A
    ][
      ( ifelse
        gameB-type = "capital-based" [ capital-based-game ]
        gameB-type = "niche-based" [ niche-based-game ]
        gameB-type = "state-based" [ state-based-game ]
        gameB-type = "new-type-example" [ new-type-game ]
      )
    ]
  ]
  tick
end

;;
;; exchange position with one of the neighbours
;;
to swap-positions
  let goal one-of neighbors
  let here patch-here
  ;; migrate from the goal
  ask turtles-on goal [
    ;;set pcolor blue
    move-to here
    show-debug-info
  ]
  ;; move to the goal
  move-to goal
end

;;
;; display some debug information
;;
to show-debug-info
  if debug [
    show who
    set color black
    set label who
    ifelse state = 1 [
      set pcolor green
    ][
      set pcolor red
    ]
  ]
end

;;
;; warn about a game type without an implementation
;;
to warn-unimplemented
  if who = 0 [
    show "[Warning] This function is not implemented yet!"
  ]
end

;;
;; implementation of elementary games used in the schemes
;;

;;
;; game A - zero-sum game played against one of the neighbours
;; with probability 1/2 of winning
;;
to zero-sum-game
  ifelse random-float 1.0 < 0.5 [
    set state 1
    set wealth wealth + 1
    ask turtles-on one-of neighbors [
      set state 0
      set wealth wealth - 1
    ]
  ][
    set state 0
    set wealth wealth - 1
    ask turtles-on one-of neighbors [
      set state 1
      set wealth wealth + 1
    ]
  ]
end

;;
;; standard version of Parrondo's scheme, based on the accumulated wealth
;;
to capital-based-game
  ;; local variable for controlling the elementary game
  let pWin -1
  ;; check the condition based on divisibility and set the probability
  ifelse wealth mod bigM = 0 [
    set pWin pWinBranch1
  ][
    set pWin pWinBranch2
  ]
  ;; play with p assigned to the appropriate branch
  ifelse random-float 1.0 < pWin [
    set state 1
    set wealth wealth + 1
  ][
    set state 0
    set wealth wealth - 1
  ]
end

;;
;; game B based on the current and the previous state
;;
to state-based-game
  ;; set the probabilities for all cases
  let probsWinState (list pWinState00 pWinState01 pWinState10 pWinState11)
  ;; select the branch before the history is shifted
  let whichBranch (2 * state + state-1)
  ;; save the current state in the history
  set state-1 state
  ;; update the state and the wealth
  ifelse (random-float 1.0) < (item whichBranch probsWinState) [
    set state 1
    set wealth wealth + 1
  ][
    set state 0
    set wealth wealth - 1
  ]
end

;;
;; game B based on the states in the von Neumann neighbourhood
;;
to niche-based-game
  ;; set the probabilities for all cases
  let probsWinNiche (list pWinNiche0 pWinNiche1 pWinNiche2 pWinNiche3 pWinNiche4)
  ;; read the states of agents in the von Neumann neighborhood
  let whichBranch sum [state] of turtles-on neighbors4
  ;; save the current state in the history
  set state-1 state
  ;; update the state and the wealth
  ifelse (random-float 1.0) < (item whichBranch probsWinNiche) [
    set state 1
    set wealth wealth + 1
  ][
    set state 0
    set wealth wealth - 1
  ]
end

;;
;; example of a procedure implementing a new game B
;;
to new-type-game
  warn-unimplemented
end

;;
;; reporters used by the monitors
;;
to-report mean-state
  report mean [state] of turtles
end

to-report mean-wealth-change
  report (mean [wealth] of turtles) - intial-wealth
end

to-report mean-wealth-change-zero
  report ([wealth] of turtle 0) - intial-wealth
end
There is only one version of this model, created over 3 years ago by Jaroslaw Miszczak.