Minority Game HubNet
WHAT IS IT?
Minority Game is a simplified model of an economic market. In each round agents choose to join one of two sides, 0 or 1. Those on the minority side at the end of a round earn a point. This game is inspired by the "El Farol" bar problem.
Each round, the live participants must choose 0 or 1. They can view the outcome history for a specified number of previous turns, and may employ a finite set of strategies to make their decision. The record available to them shows which side, 0 or 1, was in the minority in each previous round.
This HubNet version of the model allows players to play against each other and a set of androids. The androids' intelligence (and thus the difficulty of the game) can be increased through the ANDROID-MEMORY slider.
HOW IT WORKS
Each player begins with a score of 0 and must choose a side, 0 or 1, during each round. The round ends when all the human participants have made a choice.
Each computer agent begins with a score of 0 and STRATEGIES-PER-AGENT strategies. Each strategy is a list of 0 and 1 choices, such as [0 1 1 1 0 0 1], that represents a possible plan of action (in one situation choose 0, in another choose 1, and so on). Initially, each agent picks one of its strategies at random to use. If its current strategy correctly predicted whether 0 or 1 would be the minority, the agent adds one point to its score. Each strategy also earns virtual points according to whether it would have been correct or not. From then on, each agent uses whichever of its strategies has the highest virtual point total to predict whether it should select 0 or 1. Thus, for each android, the "fittest" strategies survive.
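The virtual-point bookkeeping described above can be sketched as follows. This is a minimal illustration in Python (the model itself is written in NetLogo), with hypothetical names; note that the NetLogo model breaks ties among top-scoring strategies randomly, whereas this sketch simply takes the first.

```python
# Sketch (not the model's NetLogo code) of "virtual points": every strategy
# is scored as if it had been played, and the android follows its best scorer.

def update_virtual_scores(strategies, scores, history_index, minority):
    """Add a point to each strategy that would have predicted the minority."""
    return [s + 1 if strat[history_index] == minority else s
            for strat, s in zip(strategies, scores)]

def best_strategy(scores):
    """Index of a strategy with the highest virtual score (first on ties)."""
    return scores.index(max(scores))

strategies = [[0, 1], [1, 1]]   # two strategies, ANDROID-MEMORY = 1
scores = [0, 0]
# Suppose the encoded history is 1 and side 1 turned out to be the minority:
scores = update_virtual_scores(strategies, scores, 1, minority=1)
print(scores)           # both strategies predicted 1 at index 1 -> [1, 1]
print(best_strategy(scores))  # -> 0
```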
Each strategy is a list of 1's and 0's that is 2^ANDROID-MEMORY long. The choice a computer agent makes is based on the history of past minorities. This history is also a list of 1's and 0's, ANDROID-MEMORY long, which is encoded into a binary number. That binary number is then used as an index into the strategy list to determine the choice.
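The history-to-index lookup can be sketched in a few lines of Python (an illustration, not the model's NetLogo code; the names are hypothetical):

```python
# How an android turns its memory of recent minorities into a choice.

def history_to_index(history_bits):
    """Encode a list of 0/1 minority outcomes as a binary number."""
    index = 0
    for bit in history_bits:
        index = index * 2 + bit
    return index

def android_choice(strategy, history_bits):
    """A strategy is a list of 2**memory entries; the encoded
    history selects which entry to play."""
    return strategy[history_to_index(history_bits)]

# With ANDROID-MEMORY = 3, a strategy has 2**3 = 8 entries.
strategy = [0, 1, 1, 1, 0, 0, 1, 0]
# If the last three minorities were 1, 0, 1 -> binary 101 -> index 5.
print(android_choice(strategy, [1, 0, 1]))  # -> 0
```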
This means that if there are only computer agents and no human participants, once the number of computer agents, the number of strategies, and the length of the historical record are chosen, all parameters are fixed and the behavior of the system is of interest.
HOW TO USE IT
Quickstart Instructions:
Teacher: Follow these directions to run the HubNet activity.
Optional: Zoom In (see Tools in the Menu Bar)
Optional: Change any of the settings. If you did change settings, press the SETUP button.
Teacher: Press the LOGIN button
Everyone: Open up a HubNet Client on your machine and choose a username and connect to this activity.
Teacher: When everyone is logged in, press the LOGIN button again, then press the GO button when you are ready to start.
Everyone: Choose 0 or 1. When everyone has chosen, the view will update to show the relative scores of all the players and androids.
Teacher: To run the activity again with the same group, stop the model by pressing the GO button, if it is on. Change any of the settings that you would like.
Press the SETUP button.
Teacher: Restart the simulation by pressing the GO button again.
Teacher: To start the simulation over with a new group, have all the clients log out (or boot them using the KICK button in the Control Center) and press the SETUP button.
Buttons:
SETUP: Resets the simulation according to the parameters set by the sliders. All logged-in clients remain logged in, but their scores are reset to 0.
LOGIN: Allows clients to log in but not to start playing the game.
GO: Starts and stops the model.
Sliders:
NUMBER-OF-PARTICIPANTS: Sets the total number of participants in the game, including both androids and human participants. As clients log in, androids automatically turn into human players. This ensures that there is always an odd number of participants in the world, so there is always a true minority.
PLAYER-MEMORY: The length of the history the players can view to help choose sides.
ANDROID-MEMORY: Sets the length of the history the computer agents use to make their predictions. The game is usually most interesting with values between 3 and 12, though there is some interesting behavior at 1 and 2. Note that when using an ANDROID-MEMORY of 1, STRATEGIES-PER-AGENT must be 4 or less, since with a memory of 1 there are only 2^2 = 4 distinct strategies, and each android's strategies must be unique.
STRATEGIES-PER-AGENT: Sets the number of strategies each computer agent has in its toolbox. Five is typically a good value, but it can be changed with the slider for investigative purposes.
Monitors:
HIGH SCORE and LOW SCORE show the maximum and minimum scores.
HISTORY: shows the most recent minority values. The number of values shown is determined by the PLAYER-MEMORY slider.
Plots:
SCORES: displays the minimum, maximum, and average scores over time
SUCCESS RATES HISTOGRAM: a histogram of the successes per attempts for players and androids.
NUMBER PICKING ZERO: plots the number of players and androids that picked zero during the last round
SUCCESS RATE: displays the minimum, maximum, and average success rate over time
Quickstart
NEXT >>> - shows the next quick start instruction
<<< PREVIOUS - shows the previous quick start instruction
RESET INSTRUCTIONS - shows the first quick start instruction
Client Interface
Buttons:
0: press this button if you wish to choose 0 for a particular round.
1: press this button if you wish to choose 1 for a particular round.
Monitors:
YOU ARE A: displays the shape and color of your turtle in the view
SCORE: displays how many times you have chosen a value that has been in the minority
SUCCESS RATE: the number of times you have been in the minority divided by the number of selections you have participated in.
LAST CHOICE: the value you chose in the last round
HISTORY: the values that were in the minority in the most recent rounds
CURRENT CHOICE: the value that you have chosen for this current round
CHOSEN-SIDES?: Tells you whether or not you have chosen this round
THINGS TO NOTICE
There are two extremes possible for each turn: the size of the minority is 1 agent, or (NUMBER-OF-PARTICIPANTS - 1) / 2 agents (since NUMBER-OF-PARTICIPANTS is always odd). The former represents a "wasting of resources," while the latter is closer to "the common good." However, each agent acts in an inherently selfish manner, caring only whether it, and it alone, is in the minority. Nevertheless, the latter situation is prevalent in the system without live players. Does this represent unintended cooperation between agents, or merely coordination and well-developed powers of prediction?
The agents in the view move according to how successful they are relative to the mean success rate. After running for about 100 time steps (at just about any parameter setting), how do the fastest and slowest agents compare? What does this imply?
Playing against others, what strategies seem to be the most effective? What would happen if you simply chose randomly?
Look at the SUCCESS RATE plot. As the game runs, the success rates converge. Can you explain this? At the same time, the lines in the SCORES plot diverge. Why is that?
THINGS TO TRY
What strategy works to maximize your own score?
Would you perform better against only computer agents than against humans?
What strategy works better to try to reach social equity?
EXTENDING THE MODEL
Maybe you could add computer agents with different strategies, or more dynamically evolutionary strategies. Could you figure out a strategy that works the best against these computer agents? You could code in multiple dynamic strategies that play against each other. Who would emerge victorious?
NETLOGO FEATURES
One feature instrumental to making this program feasible was the n-values primitive. Each strategy a computer agent sets up is a binary number (stored in a list) of 2^ANDROID-MEMORY values. If these were built by starting with an empty list and calling fput 2^ANDROID-MEMORY times for each strategy of each agent, setup would require on the order of NUMBER-OF-PARTICIPANTS x STRATEGIES-PER-AGENT x 2^ANDROID-MEMORY calls to fput. Using n-values sped this up by about 2 or 3 orders of magnitude.
The list primitives map and reduce were also used to simplify code.
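To see the scale involved, here is a back-of-the-envelope count with hypothetical (not prescribed) parameter values:

```python
# Rough count of how many fput calls building every strategy one element
# at a time would take, for one assumed parameter setting.

participants = 501      # NUMBER-OF-PARTICIPANTS (assumed value)
strategies_each = 5     # STRATEGIES-PER-AGENT (assumed value)
android_memory = 10     # ANDROID-MEMORY (assumed value)

fput_calls = participants * strategies_each * 2 ** android_memory
print(fput_calls)  # -> 2565120, i.e. over 2.5 million list operations
```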
RELATED MODELS
Prisoner's Dilemma
Altruism
Cooperation
El Farol
Restaurants
CREDITS AND REFERENCES
Original implementation: Daniel B. Stouffer, for the Center for Connected Learning and Computer-Based Modeling.
This model was based upon studies by Dr. Damien Challet et al. Information can be found on the web at http://www.unifr.ch/econophysics/minority/
Challet, D. and Zhang, Y.-C. Emergence of Cooperation and Organization in an Evolutionary Game. Physica A 246, 407 (1997).
Zhang, Y.-C. Modeling Market Mechanism with Evolutionary Games. Europhys. News 29, 51 (1998).
HOW TO CITE
If you mention this model in a publication, we ask that you include these citations for the model itself and for the NetLogo software:
- Wilensky, U. (2004). NetLogo HubNet Minority Game HubNet model. http://ccl.northwestern.edu/netlogo/models/HubNetMinorityGameHubNet. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.
- Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.
COPYRIGHT AND LICENSE
Copyright 2004 Uri Wilensky.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at uri@northwestern.edu.
This activity and associated models and materials were created as part of the projects: PARTICIPATORY SIMULATIONS: NETWORK-BASED DESIGN FOR SYSTEMS LEARNING IN CLASSROOMS and/or INTEGRATED SIMULATION AND MODELING ENVIRONMENT. The project gratefully acknowledges the support of the National Science Foundation (REPP & ROLE programs) -- grant numbers REC #9814682 and REC-0126227.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Variable and Breed declarations ;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

globals
[
  ;; these may be different because the memory of players and androids may be different
  player-history   ;; the history of which number was the minority (encoded into a binary number), seen by the players
  android-history  ;; the history of which number was the minority (encoded into a binary number), seen by the androids

  minority     ;; the current number in the minority
  avg-score    ;; keeps track of all turtles' average score
  stdev-score  ;; keeps track of the standard deviation of the turtles' scores
  avg-success  ;; keeps track of all turtles' average success

  ;; lists used to create the various turtles
  shape-names        ;; shapes available for players
  colors             ;; colors available for players
  color-names        ;; names of colors available for players
  used-shape-colors  ;; shape-color combinations used

  ;; quick start instructions and variables
  quick-start  ;; current quickstart instruction displayed in the quickstart monitor
  qs-item      ;; index of the current quickstart instruction
  qs-items     ;; list of quickstart instructions

  score-list    ;; for plotting
  success-list  ;; for plotting
]

turtles-own
[
  score   ;; each turtle's score
  choice  ;; each turtle's current choice, either 1 or 0
]

;; each client is represented by a player turtle
breed [ players player ]

players-own
[
  user-id        ;; name entered by the user in the client
  chosen-sides?  ;; true/false to tell if player has made current choice
                 ;; we don't move to the next round until everyone has chosen
  choices-made   ;; the number of choices each turtle has made
]

;; androids are players in the game that are controlled by the computer
breed [ androids android ]

androids-own
[
  strategies         ;; each android's strategies (a list of lists)
  current-strategy   ;; each android's current strategy (index for the above list)
  strategies-scores  ;; the accumulated virtual scores for each of the turtle's strategies
]

;;;;;;;;;;;;;;;;;;;;;
;; Setup Procedures
;;;;;;;;;;;;;;;;;;;;;

;; startup of the model
to startup
  clear-all
  setup
  hubnet-reset
end

;; setup for the overall program, will require clients to re-login
to setup
  ;; prevent an infinite loop from occurring in assign-strategies
  if (android-memory = 1 and strategies-per-android > 4)
  [
    user-message word "You need to increase the memory variable or\n"
                      "decrease the strategies-per-agent variable"
    stop
  ]
  setup-quick-start
  initialize-system
  initialize-androids
  ask patches with [pxcor = 0]
    [ set pcolor white ]
  ask players
  [
    clear-my-data
    set ycor 0
  ]
  set score-list map [ [score] of ? ] sort turtles
  update-success-list
  clear-all-plots
  reset-ticks
end

;; initializes system variables
to initialize-system
  ;; when generating a random history to start out with
  ;; first fill the longer memory then take a subsection
  ;; of that memory and give it to the other group
  let temp-history random (2 ^ (max list player-memory android-memory))
  ifelse player-memory >= android-memory
  [
    set player-history temp-history
    set android-history (reduce-memory (full-history temp-history player-memory) android-memory)
  ]
  [
    set android-history temp-history
    set player-history (reduce-memory (full-history temp-history android-memory) player-memory)
  ]
  reset-ticks
  set avg-score 0
  set stdev-score 0
  set shape-names [ "airplane" "bug" "butterfly" "car" "fish" "monster"
                    "star" "turtle" "bird" "crown" "ufo" "sun" "train" ]
  set colors      [ white brown yellow green sky violet orange pink red ]
  set color-names [ "white" "brown" "yellow" "green" "blue" "purple" "orange" "pink" "red" ]
  set used-shape-colors []
end

;; given a list reduce the length to that of the given
;; memory and return that list as a decimal number
to-report reduce-memory [history memory]
  report decimal sublist history (length history - memory) (length history - 1)
end

;; reports the history in binary format (with padding if needed)
to-report full-history [ agent-history agent-memory ]
  let full binary agent-history
  while [length full < agent-memory]
    [ set full fput 0 full ]
  report full
end

;; converts a decimal number to a binary number (stored in a list of 0's and 1's)
to-report binary [ decimal-num ]
  let binary-num []
  loop
  [
    set binary-num fput (decimal-num mod 2) binary-num
    set decimal-num int (decimal-num / 2)
    if (decimal-num = 0)
      [ report binary-num ]
  ]
end

;; converts a binary number (stored in a list of 0's and 1's) to a decimal number
to-report decimal [ binary-num ]
  report reduce [(2 * ?1) + ?2] binary-num
end

;; remove existing androids and create new ones so that androids
;; plus players total number-of-participants
to initialize-androids
  ask androids [ die ]
  set-default-shape androids "person"
  create-androids (number-of-participants - count players)
  [
    set color gray
    set xcor random-xcor
    set heading 0
    assign-strategies
    set current-strategy random strategies-per-android
    set choice item android-history (item current-strategy strategies)
    set score 0
    set strategies-scores n-values strategies-per-android [0]
  ]
  let num-picked-zero count turtles with [choice = 0]
  ifelse (num-picked-zero <= (count turtles - 1) / 2)
    [ set minority 0 ]
    [ set minority 1 ]
  set score-list map [ [score] of ? ] sort turtles
  setup-plots
end

;; gives the androids their allotted number of unique strategies
to assign-strategies ;; android procedure
  let temp-strategy false
  set strategies []
  repeat strategies-per-android
  [
    ;; make sure there are no duplicate strategies in the list
    set temp-strategy create-strategy
    while [ member? temp-strategy strategies ]
      [ set temp-strategy create-strategy ]
    set strategies fput temp-strategy strategies
  ]
end

;; creates a strategy (a binary number stored in a list of
;; length 2 ^ android-memory)
to-report create-strategy
  report n-values (2 ^ android-memory) [random 2]
end

;; reset a player to some initial values
to clear-my-data ;; players procedure
  set xcor random-xcor
  set choice random 2
  set score 0
  set chosen-sides? false
  set choices-made 0
  send-info-to-clients
  update-client
end

;;;;;;;;;;;;;;;;;;;;;;
;; Runtime Procedures
;;;;;;;;;;;;;;;;;;;;;;

to go
  every 0.1
  [
    ;; get commands and data from the clients
    listen-clients
    ;; determine if the system should be updated (advanced in time)
    ;; that is, if every player has chosen a side for this round
    if any? turtles and not any? players with [ not chosen-sides? ]
    [
      update-system
      update-scores-and-strategies
      advance-system
      update-choices
      update-success-list
      set score-list map [ [score] of ? ] sort turtles
      update-plots
      let scores [score] of turtles
      ask turtles [ move max scores min scores ]
      ask players
      [
        set chosen-sides? false
        update-client
      ]
      tick
    ]
  ]
end

to update-success-list
  set success-list map [ [ score / choices-made ] of ? ] sort players with [ choices-made > 0 ]
  if ticks > 0
    [ set success-list sentence success-list map [ [score / ticks] of ? ] sort androids ]
end

;; updates system variables such as the minority, avg-score, and stdev-score globals
to update-system
  let num-picked-zero count turtles with [choice = 0]
  ifelse num-picked-zero <= (count turtles - 1) / 2
    [ set minority 0 ]
    [ set minority 1 ]
  set-current-plot "Number Picking Zero"
  plot num-picked-zero
  set avg-score mean [score] of turtles
  set stdev-score standard-deviation [score] of turtles
  if ticks > 0
  [
    set avg-success mean (sentence [score / ticks] of androids
                                   [score / choices-made] of players with [ choices-made > 0 ])
  ]
end

;; ask all participants to update their strategy and scores
to update-scores-and-strategies
  ask androids [ update-androids-scores-and-strategies ]
  ask players [ update-score ]
end

;; updates an android's score and its strategies' virtual scores
to update-androids-scores-and-strategies ;; androids procedure
  ;; here we use MAP to simultaneously walk down both the list
  ;; of strategies, and the list of those strategies' scores.
  ;; ?1 is the current strategy, and ?2 is the current score.
  ;; For each strategy, we check to see if that strategy selected
  ;; the minority.  If it did, we increase its score by one,
  ;; otherwise we leave the score alone.
  set strategies-scores (map [ ifelse-value (item android-history ?1 = minority) [?2 + 1] [?2] ]
                             strategies strategies-scores)
  let max-score max strategies-scores
  let max-strategies []
  let counter 0
  ;; this picks a strategy with the largest virtual score
  foreach strategies-scores
  [
    if ? = max-score
      [ set max-strategies lput counter max-strategies ]
    set counter counter + 1
  ]
  set current-strategy one-of max-strategies
  update-score
end

;; if the turtle is in the minority, increase its score
to update-score ;; turtle procedure
  if choice = minority
    [ set score score + 1 ]
end

;; advances the system forward in time and updates the history
to advance-system
  ;; remove the oldest entry in the memories and place the new one on the end
  set player-history decimal (lput minority but-first full-history player-history player-memory)
  set android-history decimal (lput minority but-first full-history android-history android-memory)
  ;; send the updated info to the clients
  ask players [ hubnet-send user-id "history" full-history player-history player-memory ]
end

;; ask all participants to update their choice
to update-choices
  update-androids-choices
  ask players [ update-client ]
end

;; ask the androids to pick a new choice
to update-androids-choices
  ask androids [ set choice (item android-history (item current-strategy strategies)) ]
end

;; move turtles according to their success (a visual aid to see their collective behavior)
to move [low-score high-score]
  if low-score != high-score
    [ set ycor (((score - low-score) / (high-score - low-score)) * (world-height - 1)) + min-pycor ]
  ifelse choice = 0
    [ if xcor > 0 [ set xcor random-float (min-pxcor + 1) - 1 ] ]
    [ if xcor < 0 [ set xcor random-float (max-pxcor - 1) + 1 ] ]
end

;;;;;;;;;;;;;;;;;;;;;;
;; HubNet Procedures
;;;;;;;;;;;;;;;;;;;;;;

;; listen for hubnet client activity
to listen-clients
  while [hubnet-message-waiting?]
  [
    hubnet-fetch-message
    ifelse hubnet-enter-message?
    [ execute-create ]
    [
      ifelse hubnet-exit-message?
      [
        ;; when players log out we don't kill off the turtles
        ;; instead we just turn them into androids since it's
        ;; important to have an odd number of players.  This keeps
        ;; the total population constant
        ask players with [user-id = hubnet-message-source]
        [
          set breed androids
          set color gray
          assign-strategies
          set current-strategy random strategies-per-android
          set choice item android-history (item current-strategy strategies)
          set strategies-scores n-values strategies-per-android [0]
          set score 0
          set size 1
          display
        ]
      ]
      [
        if hubnet-message-tag = "0"
          [ choose-value 0 ]
        if hubnet-message-tag = "1"
          [ choose-value 1 ]
      ]
    ]
  ]
end

;; create a client player upon login
to execute-create
  ;; to make sure that we always have an odd number of
  ;; participants so there is always a true minority,
  ;; just change one of the androids into a player
  ;; (you can only create an odd number of androids);
  ;; if there aren't enough androids make two and update
  ;; the slider.
  if not any? androids
  [
    create-androids 2
    [
      set heading 0
      set xcor random-xcor
    ]
    set number-of-participants number-of-participants + 2
  ]
  ask one-of androids
  [
    set breed players
    set user-id hubnet-message-source
    set size 2
    set-unique-shape-and-color
    clear-my-data
  ]
  display
end

;; assigns a shape and color combination that is not
;; currently in use to a player turtle
to set-unique-shape-and-color ;; player procedure
  let max-possible-codes (length colors * length shape-names)
  let code random max-possible-codes
  while [member? code used-shape-colors and count turtles < max-possible-codes]
    [ set code random max-possible-codes ]
  set used-shape-colors (lput code used-shape-colors)
  set shape item (code mod length shape-names) shape-names
  set color item (code / length shape-names) colors
end

;; to tell the clients what they look like
to send-info-to-clients ;; player procedure
  hubnet-send user-id "You are a:" identity
end

;; report the string version of the turtle's identity (color + shape)
to-report identity ;; turtle procedure
  report (word (color-string color) " " shape)
end

;; report the string version of the turtle's color
to-report color-string [color-value]
  report item (position color-value colors) color-names
end

;; send information to the clients
to update-client ;; player procedure
  hubnet-send user-id "chosen-sides?" chosen-sides?
  hubnet-send user-id "last choice" choice
  hubnet-send user-id "current choice" choice
  hubnet-send user-id "history" full-history player-history player-memory
  hubnet-send user-id "score" score
  hubnet-send user-id "success rate" precision ifelse-value (choices-made > 0) [ score / choices-made ] [ 0 ] 2
end

;; the client chooses 0 or 1
to choose-value [ value-chosen ]
  ask players with [user-id = hubnet-message-source]
  [
    if not chosen-sides?
    [
      hubnet-send user-id "last choice" choice
      set choice value-chosen
      set chosen-sides? true
      set choices-made choices-made + 1
      hubnet-send user-id "current choice" choice
      hubnet-send user-id "chosen-sides?" chosen-sides?
    ]
  ]
end

;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Quick Start Procedures
;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; instructions to quickly setup the model, and clients to run this activity
to setup-quick-start
  set qs-item 0
  set qs-items
  [
    "Teacher: Follow these directions to run the HubNet activity."
    "Optional: Zoom In (see Tools in the Menu Bar)"
    "Optional: Change any of the settings...."
    "If you did change settings, press the SETUP button."
    "Teacher: Press the LOG-IN button."
    "Everyone: Open up a HubNet Client on your machine and..."
    "choose a user-name and..."
    "connect to this activity."
    "Teacher: Once everyone has started their client..."
    "press the LOG-IN button, then press GO."
    "Everyone: Watch your clock and choose 0 or 1."
    "Teacher: To rerun the activity with the same group,..."
    "stop the model by pressing the GO button, if it is on."
    "Change any of the settings that you would like."
    "Press the RE-RUN button."
    "Teacher: Restart the simulation by pressing the GO button again."
    "Teacher: To start the simulation over with a new group,..."
    "stop the model by pressing the GO button, if it is on..."
    "and follow these instructions again from the beginning."
  ]
  set quick-start (item qs-item qs-items)
end

;; view the next item in the quickstart monitor
to view-next
  set qs-item qs-item + 1
  if qs-item >= length qs-items
    [ set qs-item length qs-items - 1 ]
  set quick-start (item qs-item qs-items)
end

;; view the previous item in the quickstart monitor
to view-previous
  set qs-item qs-item - 1
  if qs-item < 0
    [ set qs-item 0 ]
  set quick-start (item qs-item qs-items)
end

; Copyright 2004 Uri Wilensky.
; See Info tab for full copyright and license.