Game Theory v1
WHAT IS IT?
This model is an N-person, customizable game theory model built to demonstrate the power and effects of various decision-making strategies. It looks at three distinct strategies. The first, "utility-maximization," is representative of experiential learning. The second, "fair-seeking," is representative of emotional learning driven by a purely competitive urge. The third is a predictive strategy that employs both observational and experiential learning. A combination of these groups then iteratively plays a pre-set "game" while their expected utilities are monitored over time. Whichever group ends up with the highest expected utility has the best decision-making strategy for that game.
HOW IT WORKS
The only important rules to consider are those of the learning processes.
For a utility-maximizer: when they enter a game and make a decision, they compare the payoff they received with the payoff they could have received had they made the other decision. If they did at least as well as that alternative, they keep their current tendencies. If they could have done better by making the other choice, they become more likely to make that other decision next time.
For a fair-seeker: when they enter a game and make a decision, they compare the payoff they received with the payoff their partner received. If they did as well as or better than their partner, they keep their current tendencies. If they lost to their partner, they become more likely to choose their other strategy the next time around.
For a predictor: they look at the payoff their partner received and the payoff the partner could have received from choosing the other strategy. The predictor assumes the partner will tend toward the better of those two choices, works out which of its own strategies is the best response to that prediction, and becomes more likely to play that best response.
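All three rules adjust choice probabilities by the same small step: 0.001 times the relevant payoff gap (see the learning procedures in the code below). As a worked example with illustrative payoffs, not model defaults: a utility-maximizer who chose strategy 1, received a payoff of 2, and could have earned 5 with the other choice lowers its probability of choosing strategy 1 by 0.001 * (5 - 2) = 0.003 and raises its probability of choosing strategy 2 by the same amount. A fair-seeker makes the same kind of adjustment using the gap between its own payoff and its partner's, and a predictor uses the gap between the partner's realized and potential payoffs.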
HOW TO USE IT
First, set up the actual "game":
P1S1-utility: Utility player 1 receives from strategy profile (1,1)
P2S1-utility: Utility player 2 receives from strategy profile (1,1)
P1S1-utility2: Utility player 1 receives from strategy profile (1,2)
P2S2-utility: Utility player 2 receives from strategy profile (1,2)
P1S2-utility: Utility player 1 receives from strategy profile (2,1)
P2S1-utility2: Utility player 2 receives from strategy profile (2,1)
P1S2-utility2: Utility player 1 receives from strategy profile (2,2)
P2S2-utility2: Utility player 2 receives from strategy profile (2,2)
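Taken together, the eight sliders fill in a standard 2x2 payoff matrix, with player 1's payoff listed first in each cell:

                        Player 2: strategy 1              Player 2: strategy 2
Player 1: strategy 1    (P1S1-utility, P2S1-utility)      (P1S1-utility2, P2S2-utility)
Player 1: strategy 2    (P1S2-utility, P2S1-utility2)     (P1S2-utility2, P2S2-utility2)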
Then choose how many players use each strategy:

number-utility-max: number of total players using the utility-maximization strategy
number-seek-fair: number of total players using the fair-seeking strategy
number-predictors: number of total players using the predictive strategy
THINGS TO NOTICE
The two monitors display important results for the outcomes of player 1:
The first illustrates how likely a member of each group is to choose strategy 1 over time. This shows which strategy the average member tends towards and how quickly they tend towards that strategy.
The second illustrates the expected utility of the average member of each group. When this reaches equilibrium, it highlights which decision-making strategy is best for the pre-set game.
THINGS TO TRY
Observe the differences in outcomes between a cooperation game and a competition game. The setups are as follows:
For a cooperation game:
- If both players choose strategy 1, they each receive a payoff of 5
- If both players choose strategy 2, they each receive a payoff of 2
- If the players choose different strategies, they each receive a payoff of 0
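With the slider definitions above, one way to enter this cooperation game is:

P1S1-utility = 5 and P2S1-utility = 5 (both choose strategy 1)
P1S2-utility2 = 2 and P2S2-utility2 = 2 (both choose strategy 2)
P1S1-utility2 = 0, P2S2-utility = 0, P1S2-utility = 0, and P2S1-utility2 = 0 (mismatched strategies)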
For a competition game:
- If both players choose strategy 1 (aggressive), they each receive a payoff of (R-C)/2
- If one player is aggressive and the other passive, the aggressor receives a payoff of R, while the passive player receives a payoff of 0
- If both players choose to be passive, then they each receive a payoff of R/2
- Be sure to let C > R
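For example, with the illustrative values R = 2 and C = 6: aggressive vs. aggressive pays (2 - 6)/2 = -2 to each player, an aggressor facing a passive player gets 2 while the passive player gets 0, and passive vs. passive pays 2/2 = 1 to each. Treating strategy 1 as aggressive, the corresponding sliders are P1S1-utility = P2S1-utility = -2, P1S1-utility2 = 2, P2S2-utility = 0, P1S2-utility = 0, P2S1-utility2 = 2, and P1S2-utility2 = P2S2-utility2 = 1.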
EXTENDING THE MODEL
The best extension would be to increase the number of learning procedures and also give more depth to the current ones. Learning is a complex and diverse process and should be modeled accordingly.
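One way to start is to follow the shape of the existing learning procedures: compare two payoffs and nudge prob-s1 and prob-s2 by 0.001 times the gap. The sketch below is a hypothetical "imitate" rule, not part of this model, in which a player shifts toward whatever its partner just played whenever the partner earned strictly more:

;hypothetical partnered-turtle imitation learning procedure (a sketch, not in the model)
;if the partner earned strictly more this round, shift toward the strategy the partner just played
to imitate
  if [utility] of partner > utility [
    ifelse partner-strategy1? = true [
      set prob-s1 prob-s1 + (.001 * ([utility] of partner - utility))
      set prob-s2 prob-s2 - (.001 * ([utility] of partner - utility))
    ][
      set prob-s1 prob-s1 - (.001 * ([utility] of partner - utility))
      set prob-s2 prob-s2 + (.001 * ([utility] of partner - utility))
    ]
  ]
end

Wiring it in would also require a slider for the number of imitators, a label assignment in setup, and a line in play-a-round such as: ask partnered-turtles with [strategy = "imitate"] [ imitate ].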
NETLOGO FEATURES
Setting utilities is the only place where a workaround was required. Because payoffs must be assigned to specific players, the code requires that one of the partners take charge of setting all of the utilities. If both partners were allowed to set the utilities, player 1 could end up with player 2's payoff and vice versa. To get around this, only player 1 analyzes the current strategy profile. He assigns his own utility and potential utility to himself, and then assigns his partner's values to his partner. Despite requiring a brute-force approach to coding this section, it is effective.
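The relevant pattern, excerpted from the code listing below: only player 1s (blue) run set-utilities, and inside that procedure an ask partner block hands player 2 its values. For the (1,1) strategy profile, for instance:

ask partnered-turtles with [color = blue] [ set-utilities ]  ;only player 1s assign payoffs

;inside set-utilities, when both players chose strategy 1:
set utility p1s1-utility
set potential-utility p1s2-utility
ask partner [
  set utility p2s1-utility
  set potential-utility p2s2-utility
]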
RELATED MODELS
Check out all of the Prisoner's Dilemma models! -> Social Science -> Unverified Models -> Prisoner's Dilemma
CREDITS AND REFERENCES
- The NetLogo N-Person Iterated Prisoner's Dilemma model
- Besanko, David, and Ronald Braeutigam. Microeconomics. 4th ed. N.p.: Wiley and Sons, 2011. Print.
CODE

turtles-own [
  prob-s1            ;probability of choosing strategy 1
  prob-s2            ;probability of choosing strategy 2
  strategy1?         ;boolean, "true" if strategy 1 is chosen
  utility            ;payoff received at that round
  potential-utility  ;payoff of choosing other strategy in the same round
  partnered?         ;boolean, "true" if turtle is partnered with another turtle
  partner            ;the opposing turtle in a game
  partner-strategy1? ;boolean, "true" if partner chose strategy 1
  strategy           ;the learning strategy used by each turtle
]

to setup
  clear-all
  crt number-utility-max [
    set strategy "utility-max"
    set label 1 ;labels used to identify what strategy a turtle is using
  ]
  crt number-seek-fair [
    set strategy "seek-fair"
    set label 2
  ]
  crt number-predictors [
    set strategy "predict"
    set label 3
  ]
  ask turtles [
    set shape "person"
    set size 2                  ;size used to make turtles easier to see
    set color one-of [red blue] ;determines if turtles are player 1s (blue) or player 2s (red)
    set partnered? false        ;turtles are initially unpaired
    set partner nobody
    prob-initial                ;sets turtles indifferent to their strategies
    setxy random-xcor random-ycor
  ]
  reset-ticks
end

;turtle procedure
;sets initial probabilities
to prob-initial
  set prob-s1 0.5
  set prob-s2 0.5
end

to go
  end-round ;releases partners to allow them to look for new ones
  ask turtles [
    partner-up ;turtles partner-up if they are next to an opposing player
  ]
  let partnered-turtles turtles with [partnered? = true]
  ask partnered-turtles [ play-a-round ]
  ask turtles [
    if prob-s1 > 1 [ ;probabilities can't exceed 1 or go below 0
      set prob-s1 1
      set prob-s2 0
    ]
    if prob-s1 < 0 [
      set prob-s1 0
      set prob-s2 1
    ]
  ]
  tick
end

;releases partners
to end-round
  let partnered-turtles turtles with [ partnered? = true ]
  ask partnered-turtles [ release-partner ]
end

;partnered-turtle procedure
to release-partner
  set partnered? false
  set partner nobody
  rt 180
end

;turtle procedure
;opposing turtles pair up if they're spatially located right next to each other
to partner-up
  if (not partnered?) [
    rt (random-float 90 - random-float 90)
    fd 1
    set partner one-of (turtles-at -1 0) with [ (not partnered?) and (color != [color] of myself) ]
    if partner != nobody [
      set partnered? true
      set heading 270
      ask partner [
        set partnered? true
        set partner myself
        set heading 90
      ]
    ]
  ]
end

;partnered-turtle procedure
to play-a-round
  let partnered-turtles turtles with [partnered? = true]
  pick-a-strategy ;turtles probabilistically make a decision
  set partner-strategy1? [strategy1?] of partner
  ask partnered-turtles with [color = blue] [ set-utilities ]
  ask partnered-turtles with [strategy = "utility-max"] [ utility-max ]
  ask partnered-turtles with [strategy = "seek-fair"] [ seek-fair ]
  ask partnered-turtles with [(color = blue) and (strategy = "predict")] [ predict-blue ]
  ask partnered-turtles with [(color = red) and (strategy = "predict")] [ predict-red ]
end

;partnered-turtle procedure
;turtles probabilistically select a strategy
to pick-a-strategy
  ifelse random-float 1 < prob-s1
    [ set strategy1? true ]
    [ set strategy1? false ]
end

;player1 partnered-turtle procedure
;looks at the strategy choices and assigns the appropriate utility values
to set-utilities
  ifelse strategy1? = true [
    ifelse partner-strategy1? = true [
      set utility p1s1-utility
      set potential-utility p1s2-utility
      ask partner [
        set utility p2s1-utility
        set potential-utility p2s2-utility
      ]
    ][
      set utility p1s1-utility2
      set potential-utility p1s2-utility2
      ask partner [
        set utility p2s2-utility
        set potential-utility p2s1-utility
      ]
    ]
  ][
    ifelse partner-strategy1? = true [
      set utility p1s2-utility
      set potential-utility p1s1-utility
      ask partner [
        set utility p2s1-utility2
        set potential-utility p2s2-utility2
      ]
    ][
      set utility p1s2-utility2
      set potential-utility p1s1-utility2
      ask partner [
        set utility p2s2-utility2
        set potential-utility p2s1-utility2
      ]
    ]
  ]
end

;partnered-turtle utility maximizer learning procedure
;if a turtle could've done better by choosing the other strategy, given their partner stays the same,
;then they become more likely to choose the other strategy. If they're happy with their decision,
;they become more likely to make the same decision
to utility-max
  ifelse strategy1? = true [
    if utility < potential-utility [
      set prob-s1 prob-s1 - (.001 * (potential-utility - utility))
      set prob-s2 prob-s2 + (.001 * (potential-utility - utility))
    ]
  ][
    if utility < potential-utility [
      set prob-s1 prob-s1 + (.001 * (potential-utility - utility))
      set prob-s2 prob-s2 - (.001 * (potential-utility - utility))
    ]
  ]
end

;partnered-turtle competitive learning procedure
;If a turtle loses to their opponent, then they become less likely to make the same decision. They're
;indifferent to ties though
to seek-fair
  ifelse strategy1? = true [
    if utility < [utility] of partner [
      set prob-s1 prob-s1 - (.001 * (([utility] of partner) - utility))
      set prob-s2 prob-s2 + (.001 * (([utility] of partner) - utility))
    ]
  ][
    if utility < [utility] of partner [
      set prob-s1 prob-s1 + (.001 * (([utility] of partner) - utility))
      set prob-s2 prob-s2 - (.001 * (([utility] of partner) - utility))
    ]
  ]
end

;player1 partnered-turtle predictor learning procedure
;They look at whether or not their partner made their best decision. The turtles assume that their partner will tend
;towards making the best decision. They then respond with their best decision.
to predict-blue
  ifelse partner-strategy1? = true [
    ifelse [potential-utility] of partner > [utility] of partner [
      ifelse p1s1-utility2 > p1s2-utility2 [
        set prob-s1 prob-s1 + .001 * ([potential-utility] of partner - [utility] of partner)
        set prob-s2 prob-s2 - .001 * ([potential-utility] of partner - [utility] of partner)
      ][
        set prob-s1 prob-s1 - .001 * ([potential-utility] of partner - [utility] of partner)
        set prob-s2 prob-s2 + .001 * ([potential-utility] of partner - [utility] of partner)
      ]
    ][
      ifelse p1s1-utility > p1s2-utility [
        set prob-s1 prob-s1 + .001 * ([utility] of partner - [potential-utility] of partner)
        set prob-s2 prob-s2 - .001 * ([utility] of partner - [potential-utility] of partner)
      ][
        set prob-s1 prob-s1 - .001 * ([utility] of partner - [potential-utility] of partner)
        set prob-s2 prob-s2 + .001 * ([utility] of partner - [potential-utility] of partner)
      ]
    ]
  ][
    ifelse [potential-utility] of partner > [utility] of partner [
      ifelse p1s1-utility > p1s2-utility [
        set prob-s1 prob-s1 + .001 * ([potential-utility] of partner - [utility] of partner)
        set prob-s2 prob-s2 - .001 * ([potential-utility] of partner - [utility] of partner)
      ][
        set prob-s1 prob-s1 - .001 * ([potential-utility] of partner - [utility] of partner)
        set prob-s2 prob-s2 + .001 * ([potential-utility] of partner - [utility] of partner)
      ]
    ][
      ifelse p1s1-utility2 > p1s2-utility2 [
        set prob-s1 prob-s1 + .001 * ([utility] of partner - [potential-utility] of partner)
        set prob-s2 prob-s2 - .001 * ([utility] of partner - [potential-utility] of partner)
      ][
        set prob-s1 prob-s1 - .001 * ([utility] of partner - [potential-utility] of partner)
        set prob-s2 prob-s2 + .001 * ([utility] of partner - [potential-utility] of partner)
      ]
    ]
  ]
end

;player2 partnered-turtle predictor learning procedure
;works the same way as predict-blue, but uses player 2's payoffs
to predict-red
  ifelse partner-strategy1? = true [
    ifelse [potential-utility] of partner > [utility] of partner [
      ifelse p2s1-utility2 > p2s2-utility2 [
        set prob-s1 prob-s1 + .001 * ([potential-utility] of partner - [utility] of partner)
        set prob-s2 prob-s2 - .001 * ([potential-utility] of partner - [utility] of partner)
      ][
        set prob-s1 prob-s1 - .001 * ([potential-utility] of partner - [utility] of partner)
        set prob-s2 prob-s2 + .001 * ([potential-utility] of partner - [utility] of partner)
      ]
    ][
      ifelse p2s1-utility > p2s2-utility [
        set prob-s1 prob-s1 + .001 * ([utility] of partner - [potential-utility] of partner)
        set prob-s2 prob-s2 - .001 * ([utility] of partner - [potential-utility] of partner)
      ][
        set prob-s1 prob-s1 - .001 * ([utility] of partner - [potential-utility] of partner)
        set prob-s2 prob-s2 + .001 * ([utility] of partner - [potential-utility] of partner)
      ]
    ]
  ][
    ifelse [potential-utility] of partner > [utility] of partner [
      ifelse p2s1-utility > p2s2-utility [
        set prob-s1 prob-s1 + .001 * ([potential-utility] of partner - [utility] of partner)
        set prob-s2 prob-s2 - .001 * ([potential-utility] of partner - [utility] of partner)
      ][
        set prob-s1 prob-s1 - .001 * ([potential-utility] of partner - [utility] of partner)
        set prob-s2 prob-s2 + .001 * ([potential-utility] of partner - [utility] of partner)
      ]
    ][
      ifelse p2s1-utility2 > p2s2-utility2 [
        set prob-s1 prob-s1 + .001 * ([utility] of partner - [potential-utility] of partner)
        set prob-s2 prob-s2 - .001 * ([utility] of partner - [potential-utility] of partner)
      ][
        set prob-s1 prob-s1 - .001 * ([utility] of partner - [potential-utility] of partner)
        set prob-s2 prob-s2 + .001 * ([utility] of partner - [potential-utility] of partner)
      ]
    ]
  ]
end