Corruption with Repeated Interactions
WHAT IS IT?
While corruption takes many different forms, many of them can be abstracted as a corrupt interaction between at least two individuals or groups. This model builds upon the work of Hammond (2000). Our extension explores how repeated interactions between agents affect the spread of the "fear of enforcement" in an artificial society.
HOW IT WORKS
Every agent has an intrinsic morality value (a randomly generated float between 0 and 1). Each agent also has a memory of past interactions, in which the strategy (corrupt or honest) of each partner is stored. Finally, every agent has a social network: a list of "friend" agents whose actions and status it monitors. While many of these values are fixed throughout a simulation run, every agent perceives the risks and rewards of corruption differently. This perception is based on an analysis of the agent's own memory and its social network.
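As a rough illustration of this agent state, the sketch below builds one agent record in Python. The field names are ours, not the NetLogo variable names, and the structure is a simplification of the model's turtle properties.

```python
import random

def make_agent(size_of_memory):
    """Illustrative agent record: intrinsic morality, a bounded memory of
    partners' past strategies (1 = corrupt, 0 = honest), and a friend network."""
    return {
        "morality": random.random(),  # fixed for the whole run, uniform in [0, 1)
        "memory": [random.choice([0, 1]) for _ in range(size_of_memory)],
        "network": [],                # later filled with 'friend' agents to monitor
        "suspended_time": 0,          # rounds of suspension remaining
        "reports": 0,                 # reports received since the last suspension
    }
```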
The model can be summarized as follows:
1. Select agent: every round, an agent is randomly paired with another agent.
2. Select strategy: both agents simultaneously decide to act corruptly or honestly. The decision rule is based on the agent's bounded rationality.
3. Receive payoff: acting corruptly yields the highest payoff (x), but only if the other agent also chooses the corrupt strategy. If both agents act honestly, they both receive the lowest payoff (y). If only one of the two agents acts corruptly, the honest agent reports the corrupt agent.
4. Suspend agents: if an agent is reported a predefined number of times, that agent is suspended for a period of time. A suspended agent cannot interact with other agents or gain payoffs.
5. Release agents: after serving the suspension time, the agent is allowed to interact with the others again.
In the case of repeated interactions, an agent keeps its partner for a set number of rounds.
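The round structure above can be sketched in simplified form. The Python below is an illustrative reduction of one interaction, not the NetLogo implementation: agent fields and parameter names are our own, payoffs are omitted, and reporting is shown as deterministic.

```python
def play_round(agent, partner, reports_for_suspension, suspension_term):
    """One interaction: compare strategies, let the honest agent report a lone
    corrupt one, and suspend agents that accumulate enough reports."""
    if agent["corrupt"] != partner["corrupt"]:
        # Mismatch: the honest agent reports the corrupt one.
        offender = agent if agent["corrupt"] else partner
        offender["reports"] += 1
        if offender["reports"] >= reports_for_suspension:
            offender["suspended_time"] = suspension_term
            offender["reports"] = 0
    # Each agent remembers its partner's most recent strategy,
    # pushing out the oldest entry of its bounded memory.
    agent["memory"] = [int(partner["corrupt"])] + agent["memory"][:-1]
    partner["memory"] = [int(agent["corrupt"])] + partner["memory"][:-1]
```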
HOW TO USE IT
- A seed can be specified to reproduce a specific model run.
- The 'setup' button sets up the model using the specified parameters. This generates the turtles and their networks, and calculates their initial decisions.
- The small 'go' button performs one turn (see the steps described in 'How it works'). The decisions for the next turn are then recalculated based on the previous outcome.
- The large 'go' button (with the forever symbol) repeatedly takes turns until it is clicked again.
- The 'size-of-memory' slider sets how many previous encounters an agent will remember.
- The 'size-of-network' slider sets how many agents are included in the network.
- The 'corruption-base-payoff' and 'honesty-base-payoff' sliders set the base payoffs for the corrupt-strategy and honest-strategy.
- The 'suspended-term' slider sets how long an agent will be suspended for.
- The 'reports-for-suspension' slider sets how many reports are required before an agent is suspended.
- The 'number-of-interactions' slider sets how many rounds an agent should interact with its current partner.
- The higher the value of the 'last-action-weight' slider, the larger the influence of the last interaction on the agent’s estimation of encountering a corrupt agent. When this value is set to 100, the agent will only include the last interaction for the estimation.
- The 'number-of-agents' slider will select how large the agent population is.
- The 'Mean Morality' reporter shows the average morality value of the agent population.
- The graph shows the total number of agents that are corrupt (yellow), honest (blue), or suspended (red) at the current moment.
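Taken together, these sliders feed one decision rule per agent: estimate the chance A of meeting a corrupt partner (optionally overweighting the last interaction via 'last-action-weight'), estimate the chance B of being suspended from the share of corrupt friends who got caught, and compare the expected corruption payoff with the honest payoff. The Python sketch below reproduces that calculation following Hammond (2000); the function and argument names are ours, and the branch condition is simplified (the model also uses the plain average when the agent has no current partner).

```python
def expected_corruption_payoff(memory, last_action_weight, morality,
                               corruption_base_payoff, honesty_base_payoff,
                               friends_suspended, friends_corrupt,
                               suspended_term):
    """Expected payoff of acting corruptly:
    (1 - B) * (A * x_i + (1 - A) * y) + B * (y - suspended_term * y)."""
    w = last_action_weight / 100.0
    if w == 0:
        # Share of corrupt partners in memory (1 = corrupt, 0 = honest).
        a = sum(memory) / len(memory)
    else:
        # Weight the most recent interaction, average the rest.
        rest = memory[1:]
        a = w * memory[0] + (1 - w) * (sum(rest) / len(rest))
    # Perceived chance of suspension, guarding against division by zero.
    b = friends_suspended / friends_corrupt if friends_corrupt else 0.0
    # Morality-weighted corruption payoff x_i.
    x_i = (1 - morality) * corruption_base_payoff
    y = honesty_base_payoff
    return (1 - b) * (a * x_i + (1 - a) * y) + b * (y - suspended_term * y)
```

The agent then acts honestly whenever 'honesty-base-payoff' exceeds this value, and corruptly otherwise.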
THINGS TO NOTICE
With low corruption payoffs and/or high honesty payoffs, the population remains honest the vast majority of the time. Increasing the corruption payoffs (or lowering the honesty payoffs) results in a permanently corrupt society. However, with some settings, the population starts out corrupt but 'tips' at a random point where a large number of agents are suspended at once, causing a transition to a low-corruption society. Once the society reaches this state, it does not revert.
THINGS TO TRY
While the model is based on Hammond's work, it is currently unable to exactly reproduce Hammond's findings. Hammond described that every agent in the model acts honestly after the transition, whereas in our model we still see corrupt agents in the low-corruption society.
EXTENDING THE MODEL
An interesting avenue for further research would be to explore how different network configurations influence the transition from a high-corruption society to a low-corruption one. Other possible extensions include perceived punishment (rather than all agents knowing the objective punishment) or a different reporting system.
NETLOGO FEATURES
Built using NetLogo 6.0.
RELATED MODELS
--
CREDITS AND REFERENCES
We would like to thank all the participants of the workshop 'Agent-based Modelling for Criminological Theory Testing' for their support and help in building the model. A special thank you to Jennifer Badham, Álvaro Martins Espíndola, and Wouter Steenbeek for their critical eye on the code.
References:
Hammond, R. (2000). Endogenous transition dynamics in corruption: An agent-based computer model. Center on Social and Economic Dynamics. http://www.elautomataeconomico.com.ar/download/papers/Corrupcion-Hammond.pdf
Lonsdale, C. (2017). Creating an agent-based model of Hammond's model of social inequality using NetLogo. http://charleslonsdale.co.uk/portfolio/advanced-2.php. Accessed 9 January 2019.
MODEL CODE
globals [
  choices ;; Different strategies for agents. 1 = Corrupt, and 0 = Honest
]

;; Agent properties
turtles-own [
  morality            ;; The agent's predisposition to act good (0 = completely immoral, 1 = total good and righteousness)
  network             ;; Agentset of 'friends'
  memory              ;; List of interactions with previously encountered agents
  suspended-time      ;; Amount of time an agent is suspended for if caught for a corrupt act
  corrupt-previously? ;; If true, the agent was corrupt in the last round
  reports             ;; Total number of reports received since last suspension
  chosen?             ;; If true, the agent has been matched with a partner
  partner             ;; The agent's partner
  my-interactions     ;; Number of interactions with the current partner
]

;;-------------------------------------------------------------------------
;; MODEL SETUP AND GO PROCEDURE
;;-------------------------------------------------------------------------

;; Prepare model setup
to setup
  clear-all
  random-seed seed
  reset-ticks
  resize-world 0 ((number-of-agents / 8) - 1) 0 8 ;; Resize world so it is large enough to accommodate every agent on its own patch
  setup-globals        ;; Create the global variable containing the actions the agents can choose from
  generate-population
  setup-networks
  calculate-decision   ;; All agents need to decide if they will act corrupt or honest before the first round
  repeat reports-for-suspension [ go ] ;; A sort of burn-in that allows some agents to be potentially suspended
  ask links [ die ]    ;; Reset the links, agents' partners, and interactions
  ask turtles [
    set chosen? false
    set partner nobody
    set my-interactions 0
  ]
  reset-ticks
  random-seed new-seed ;; Making sure that the decisions aren't always exactly the same
end

;; Procedure for a simulation round
to go
  generate-links     ;; agents are randomly matched with another agent (partner)
  compare-actions    ;; agents compare actions with their partner
  enforce            ;; check if agents get suspended or can be returned to the game
  calculate-decision ;; agents decide whether to act corrupt or honest
  tick
end

;;-------------------------------------------------------------------------
;; AGENT DECISION-MAKING
;;-------------------------------------------------------------------------

;; Calculate the decision of each agent in turn
to calculate-decision
  ask turtles with [ my-interactions > 0 ] [ ;; selecting agents who have interacted previously
    if my-interactions = number-of-interactions [ ;; if an agent has reached the specified number of interactions ('number-of-interactions')
      set my-interactions 0 ;; my-interactions will be reset to zero
      set chosen? false     ;; the agent can be chosen by another agent as their partner
      set partner nobody    ;; the agent has no partner
      ask my-links [ die ]  ;; remove the link between the agent and its partner
    ]
  ]
  ;; IF agent is suspended, set its color to red and skip rest of its turn
  ask turtles [
    ifelse suspended-time != 0
    [ set color red ]
    ;; ELSE agents who are not suspended calculate payoffs
    [
      ;; Calculate the weighted corruption payoff (xi) based on the agent's morality
      let xi (1 - morality) * corruption-base-payoff
      ;; Define variable 'A' for later calculations
      let A 0
      ;; Check if the agent has not yet interacted with its current partner OR if no weight is assigned to the last action
      ifelse (my-interactions = 0) or (last-action-weight = 0)
      ;; Find the number of corrupt agents in memory
      [
        let memory-corrupt sum memory
        ;; Sets A as probability of encountering a corrupt agent
        set A memory-corrupt / size-of-memory
      ]
      ;; ELSE - procedure for assigning weights
      [
        let last-action first memory ;; 'first memory' selects the most recent value of memory
        let weighted-action last-action * (last-action-weight / 100) ;; assign weight to it
        let other-actions but-first memory ;; select the remaining values in memory
        let other-corrupt (sum other-actions) / (length other-actions) ;; Find the proportion of corrupt agents in 'other-actions'
        set other-corrupt (1 - (last-action-weight / 100)) * other-corrupt ;; assign weight to it
        ;; Sets A as probability of encountering a corrupt agent based on repeated interactions
        set A weighted-action + other-corrupt
      ]
      ;; Define the number of agents in network that are suspended/corrupt
      let friends-suspended 0
      let friends-corrupt 0
      ;; Scan through network, updating suspended/corrupt counts
      ask network [
        if suspended-time != 0 [ set friends-suspended friends-suspended + 1 ]
        if corrupt-previously? = true [ set friends-corrupt friends-corrupt + 1 ]
      ]
      ;; Define variable 'B' for perceived chance of being suspended
      let B 0
      ;; Sets probability, avoiding potential divide-by-zero errors if no friends are corrupt
      if friends-corrupt != 0 [ set B friends-suspended / friends-corrupt ]
      ;; Calculate the agent's corruption payoff for the round based on A, B, x, y and k (following Hammond 2000)
      let corruption-payoff (1 - B) * ((A * xi) + (1 - A) * honesty-base-payoff) + B * (honesty-base-payoff - suspended-term * honesty-base-payoff)
      ;; Determine whether the agent will be corrupt or honest in the next round and set values accordingly
      ifelse honesty-base-payoff > corruption-payoff
      [ set color blue ]   ;; IF honesty-base-payoff > corruption-payoff
      [ set color yellow ] ;; ELSE honesty-base-payoff <= corruption-payoff
    ]
  ]
end

;;-------------------------------------------------------------------------
;; CREATE LINKS PROCEDURE
;;-------------------------------------------------------------------------

;; Create links between an agent and its partner to symbolize working together
to generate-links
  ;; Selecting only agents that are not suspended and have no partner yet
  ask turtles with [ suspended-time = 0 and chosen? = false ] [
    if partner = nobody [
      ;; Get a random partner from the other agents who are not suspended and have no partner
      set partner one-of other turtles with [ chosen? = false and suspended-time = 0 ]
      if partner != nobody [
        set chosen? true
        create-link-with partner
        ;; Make sure that the partner cannot be chosen by another agent
        ask partner [
          set partner myself
          set chosen? true
        ]
      ]
    ]
  ]
end

;;-------------------------------------------------------------------------
;; COMPARE ACTIONS PROCEDURE
;;-------------------------------------------------------------------------

;; Using the links created, compare actions for each agent-partner pair
to compare-actions
  ;; Go through each link between an agent and its partner individually. end1 is always the agent; end2 is the partner.
  ask links [
    ;; Variables to hold the strategy of each agent
    let agent-decision "null"
    let partner-decision "null"
    ;; Get the decisions of the agent and partner based on their colours
    ask end1 [ ;; end1 is the agent
      ifelse color = blue
      [ set agent-decision 0 ]
      [ set agent-decision 1 ]
    ]
    ask end2 [ ;; end2 is the partner
      ifelse color = blue
      [ set partner-decision 0 ]
      [ set partner-decision 1 ]
    ]
    ;; Find mismatches, i.e. either the agent or the partner chose the corrupt action
    ifelse agent-decision != partner-decision
    [
      ifelse agent-decision = 1
      [ ;; IF the agent (end1) was corrupt
        ask end1 [
          if random-float 1 < report-prop [ set reports reports + 1 ] ;; the agent may be reported
          set corrupt-previously? true ;; and updates corrupt-previously? to true
        ]
        ask end2 [
          set corrupt-previously? false ;; the partner updates corrupt-previously? to false
        ]
      ]
      [ ;; IF the partner (end2) was corrupt
        ask end2 [
          if random-float 1 < report-prop [ set reports reports + 1 ] ;; the partner may be reported
          set corrupt-previously? true ;; the partner updates corrupt-previously? to true
        ]
        ask end1 [
          set corrupt-previously? false ;; the agent updates corrupt-previously? to false
        ]
      ]
    ]
    ;; IF agent and partner both chose the corrupt action
    [
      ifelse agent-decision = 1 and partner-decision = 1
      [ ask both-ends [ set corrupt-previously? true ] ]
      ;; ELSE agent and partner both chose the honest action
      [ ask both-ends [ set corrupt-previously? false ] ]
    ]
    ;; Update the memory for each agent: remove the oldest and add the most recent decision of the other agent
    ;; update memory of the agent
    ask end1 [
      set memory but-last memory
      set memory fput partner-decision memory
      set my-interactions my-interactions + 1
    ]
    ;; update memory of the partner
    ask end2 [
      set memory but-last memory
      set memory fput agent-decision memory
      set my-interactions my-interactions + 1
    ]
  ]
end

;;-------------------------------------------------------------------------
;; ENFORCEMENT PROCEDURE
;;-------------------------------------------------------------------------

;; Procedure for suspending agents for corrupt actions and later returning them to the game
to enforce
  ask turtles [
    ;; Decrease remaining suspended time for all suspended agents
    if suspended-time > 0 [
      set suspended-time suspended-time - 1
      ;; If the agent has served the suspension term, it no longer counts as a corrupt agent (change 'corrupt-previously?' to false)
      if suspended-time = 0 [ set corrupt-previously? false ]
    ]
    ;; Suspend any agents that have exceeded the report threshold
    if reports >= reports-for-suspension [
      set suspended-time suspended-term
      ;; Agents that get suspended lose their current partner
      ask partner [
        ask my-links [ die ]
        set chosen? false
        set partner nobody
        set my-interactions 0
      ]
      ask my-links [ die ]
      set chosen? false
      set partner nobody
      set my-interactions 0
      set reports 0
    ]
  ]
end

;;-------------------------------------------------------------------------
;; CREATING THE WORLD AND AGENTS
;;-------------------------------------------------------------------------

;; Create the global variable containing the choices for an action
to setup-globals
  set choices [1 0] ;; 1 = Corrupt, and 0 = Honest
end

;; Create agent population
to generate-population
  ask n-of number-of-agents patches [ ;; A number of patches (specified by the user through 'number-of-agents') will...
    sprout 1 [ ;; ...each create a single agent with the following settings:
      set shape "circle"
      set chosen? false   ;; Agents are not yet matched to another agent
      set network 0       ;; No networks have been established
      set morality random-float 1 ;; Morality is randomly and uniformly distributed (0 = completely immoral, 1 = total good and righteousness)
      set memory n-values size-of-memory [ one-of choices ] ;; A random memory is created for every agent
      set suspended-time 0 ;; No agent is suspended at the start
      set corrupt-previously? false ;; No agent was corrupt in the previous round
      set reports 0       ;; No one has received any reports
      set partner nobody  ;; Agents do not have a partner yet
    ]
  ]
end

;; Set up and create the networks for all agents
to setup-networks
  while [ min [ count my-links ] of turtles < size-of-network ] ;; checks if there are still agents without a large enough network
  [
    ask links [ die ] ;; If so, the current network is removed...
    create-network size-of-network ;; ...and another network is generated through the 'create-network' procedure
  ]
  ask turtles [ set network link-neighbors ] ;; Once a network is established, it is stored in the agent's 'network' variable
  ask links [ die ] ;; The links are no longer needed, because all agents remember their own network
end

to create-network [DD]
  ask turtles [
    let number-agents-needed DD - count my-links ;; check how many links the agent still needs to reach 'size-of-network' (DD)
    if number-agents-needed > 0 [ ;; check if the agent needs more links
      let candidates other turtles with [ count my-links < DD ] ;; candidates are other agents without enough links/friends
      create-links-with n-of min (list number-agents-needed count candidates) candidates ;; randomly select the needed agents (candidates) for this agent's network
    ]
  ]
end

;;-------------------------------------------------------------------------
;; REPORT FUNCTIONS
;;-------------------------------------------------------------------------

to-report x-morality
  report mean [morality] of turtles
end
There is only one version of this model, created over 5 years ago by Nick van Doormaal.
Attached files
File | Type | Description | Last updated
---|---|---|---
Corruption and Shadow of Future - ABM Workshop presentation.pptx | powerpoint | Presentation given at the second ABM4CTT Workshop (2019) | over 5 years ago, by Nick van Doormaal
Corruption with Repeated Interactions.png | preview | Preview of Corruption Model in NetLogo | over 5 years ago, by Nick van Doormaal
Original NetLogo Implementation - Lonsdale 2017.html | html | Link to the original NetLogo implementation of Hammond's model, created by Charles Lonsdale in 2017 | over 5 years ago, by Nick van Doormaal
This model does not have any ancestors.
This model does not have any descendants.