Minority Game
WHAT IS IT?
This is a simplified model of an economic market. In each time step, agents choose one of two sides, 0 or 1, and those on the minority side win a point. The problem is inspired by the "El Farol" bar problem. Each agent uses a finite set of strategies to make a decision based on the past record; however, the record consists only of which side, 0 or 1, was in the minority, not the actual count of how many agents chose each side.
HOW IT WORKS
Each agent begins with a score of 0 and STRATEGIES-PER-AGENT strategies. Initially, each agent picks one of these strategies at random to use. The initial historical record is generated randomly. If an agent's current strategy correctly predicted whether 0 or 1 would be the minority, the agent adds one point to its score. Each strategy also earns virtual points according to whether it would have been correct. From then on, each agent uses whichever of its strategies has the highest virtual point total to predict whether it should select 0 or 1.
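The virtual-scoring rule can be summarized as follows. This is an illustrative sketch in Python, not the model's actual NetLogo code, and the function names are made up for exposition (note the model breaks ties among equally scored strategies at random, while this sketch simply takes the first):

```python
def update_virtual_scores(predictions, scores, minority):
    """predictions[i] is what strategy i predicted this turn;
    scores[i] is that strategy's accumulated virtual score.
    A strategy earns a point whenever it would have picked the minority."""
    return [s + 1 if p == minority else s
            for p, s in zip(predictions, scores)]

def best_strategy(scores):
    """Index of a strategy with the highest virtual score
    (first one on ties; the model itself picks randomly among ties)."""
    return scores.index(max(scores))

# Strategies 1 and 2 predicted side 1; if 1 was the minority, they gain a point.
scores = update_virtual_scores([0, 1, 1, 0], [2, 5, 3, 5], minority=1)
current = best_strategy(scores)
```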
A strategy consists of a list of 1's and 0's that is 2^MEMORY items long. The choice each turtle makes is based on the history of past minorities. This history is also a list of 1's and 0's, MEMORY items long, but it is encoded into a binary number. That binary number is then used as an index into the strategy list to determine the choice.
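The encoding step above can be sketched in Python (an illustrative sketch of the same lookup; the model itself is written in NetLogo):

```python
import random

MEMORY = 3  # length of the remembered history

# A strategy maps every possible history (2^MEMORY of them) to a choice.
strategy = [random.randint(0, 1) for _ in range(2 ** MEMORY)]

# The history is a list of past minority sides, e.g. [1, 0, 1].
history = [1, 0, 1]

# Encode the history as a binary number: [1, 0, 1] -> 0b101 -> 5.
index = 0
for bit in history:
    index = (index * 2) + bit

# Use that number to index into the strategy list and get the choice.
choice = strategy[index]
```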
This means that once the number of agents, the number of strategies, and the length of the historical record are chosen, all parameters are fixed and the behavior of the system is of interest.
HOW TO USE IT
GO: Starts and stops the model.
SETUP: Resets the simulation according to the parameters set by the sliders.
NUMBER: Sets the number of agents participating. This is always odd to ensure a minority exists.
MEMORY: Sets the length of the history which the agents use to predict their behavior. Most interesting between 3 and 12, though there is some interesting behavior at 1 and 2. Note that when using a MEMORY of 1, the STRATEGIES-PER-AGENT needs to be 4 or less.
STRATEGIES-PER-AGENT: Sets the number of strategies each agent has in their toolbox. Five is typically a good value. However, this can be changed for investigative purposes using the slider, if desired.
COLOR-BY: "Choice" colors each agent by its current choice: red if it chose 0, blue if it chose 1. "Success" colors each agent by its success rate (the number of times it has been in the minority divided by the number of selections): green if it is within one standard deviation of the mean success rate, red if it is more than one standard deviation above the mean, and blue if it is more than one standard deviation below.
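The "Success" coloring rule amounts to a three-way classification against the population mean. A minimal Python sketch (illustrative only; the function name is invented here):

```python
def success_color(score, avg, stdev):
    """Classify an agent by how its score compares to the population mean."""
    if score > avg + stdev:
        return "red"    # more than one standard deviation above the mean
    if score < avg - stdev:
        return "blue"   # more than one standard deviation below the mean
    return "green"      # within one standard deviation of the mean
```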
RECOMMENDED SETTINGS: NUMBER=501, MEMORY=6, STRATEGIES-PER-AGENT=5 (Should be loaded by default)
CAUTION: Beware of setting the MEMORY slider to high values. The length of each strategy scales exponentially with MEMORY (each strategy is 2^MEMORY items long), so each additional unit of MEMORY roughly doubles the time SETUP takes to run. This cost is incurred only when SETUP is run.
THINGS TO NOTICE
There are two extremes possible on each turn: the minority can be as small as 1 agent or as large as (NUMBER - 1) / 2 agents (since NUMBER is always odd). The former represents a "wasting of resources," while the latter is closer to "the common good." Each agent acts in an inherently selfish manner, caring only about whether it itself is in the minority; nevertheless, the latter situation is prevalent. Does this represent unintended cooperation between agents, or merely coordination and well-developed powers of prediction?
The agents move according to how successful they are relative to the mean success rate. After running for about 100 time steps (at just about any parameter setting), how do the fastest and slowest agents compare? What does this imply?
THINGS TO TRY
Notice how the population of agents choosing 0 stays close to NUMBER/2. How do the deviations change as you change the value of MEMORY?
How do things change if you keep everything the same but change the STRATEGIES-PER-AGENT?
EXTENDING THE MODEL
There are a few evolutionary possibilities for this model which could be coded.
(1) Maybe after some (long) amount of time, the least successful agent is replaced by a clone of the most successful agent, with zeroed scores and possibly mutated strategies. How would things change then?
(2) Similar to (1), you could start the agents with a very small memory value, and again replace the least successful agent with a clone of the most successful agent. But this time instead of just zeroing the scores and giving mutated strategies, you also add or subtract one unit of memory for the new agent. What would happen here? Would their brains continue to get bigger or find some happy value? Would people with small memory be altogether eliminated, or would they survive (maybe even still thrive)?
NETLOGO FEATURES
The n-values primitive is used to set up strategies for each player. The primitives map and reduce were also used to simplify the code.
RELATED MODELS
- any of the Prisoner's Dilemma models
- Altruism
- Cooperation
CREDITS AND REFERENCES
Original implementation: Daniel B. Stouffer, for the Center for Connected Learning and Computer-Based Modeling.
This model was based upon studies by Dr. Damien Challet, et al.
Information can be found on the web at http://www.unifr.ch/econophysics/minority/
Challet, D. and Zhang, Y.-C. Emergence of Cooperation and Organization in an Evolutionary Game. Physica A 246, 407 (1997).
Zhang, Y.-C. Modeling Market Mechanism with Evolutionary Games. Europhys. News 29, 51 (1998).
HOW TO CITE
If you mention this model in a publication, we ask that you include these citations for the model itself and for the NetLogo software:
- Wilensky, U. (2004). NetLogo Minority Game model. http://ccl.northwestern.edu/netlogo/models/MinorityGame. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.
- Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.
COPYRIGHT AND LICENSE
Copyright 2004 Uri Wilensky.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at uri@northwestern.edu.
This model was created as part of the projects: PARTICIPATORY SIMULATIONS: NETWORK-BASED DESIGN FOR SYSTEMS LEARNING IN CLASSROOMS and/or INTEGRATED SIMULATION AND MODELING ENVIRONMENT. The project gratefully acknowledges the support of the National Science Foundation (REPP & ROLE programs) -- grant numbers REC #9814682 and REC-0126227.
globals [
  history            ;; the history of which number was the minority (encoded into a binary number)
  minority           ;; the current number in the minority
  avg-score          ;; keeps track of the turtles' average score
  stdev-score        ;; keeps track of the standard deviation of the turtles' scores
]

turtles-own [
  score              ;; each turtle's score
  choice             ;; each turtle's choice
  strategies         ;; each turtle's strategies (a list of lists)
  current-strategy   ;; each turtle's current strategy (index in above list)
  strategies-scores  ;; the accumulated virtual scores for each of the turtle's strategies (a list)
]

;; setup procedure
to setup
  clear-all
  if (memory = 1 and strategies-per-agent > 4)  ;; prevent an infinite loop from occurring
    [ user-message word "You need to increase the memory variable or\n"
                        "decrease the strategies-per-agent variable"
      stop ]
  initialize-system
  initialize-turtles
  update-system
  reset-ticks
end

;; resets state variables
to initialize-system
  set history random (2 ^ memory)
  set avg-score 0
  set stdev-score 0
end

;; creates the specified number of turtles
to initialize-turtles
  crt number [
    setxy 0 (world-height * who / number)  ;; disperse over the y-axis
    set heading 90
    assign-strategies
    set current-strategy random strategies-per-agent
    set choice item history (item current-strategy strategies)
    ifelse (color-by = "choice")
      [ recolor-by-choice ]
      [ set color green ]  ;; we initially set all to green to prevent divide by zero error
    set score 0
    set strategies-scores n-values strategies-per-agent [0]
  ]
end

;; gives the turtles their allotted number of unique strategies
to assign-strategies  ;; turtle procedure
  set strategies []
  while [ length remove-duplicates strategies < strategies-per-agent ]
    [ set strategies n-values strategies-per-agent [create-strategy] ]
end

;; reports a random strategy (a list of 1's or 0's)
to-report create-strategy
  report n-values (2 ^ memory) [random 2]
end

to go
  ask turtles [ update-scores-and-strategy ]
  set history decimal (lput minority but-first full-history)
  ask turtles [ update-choice-and-color ]
  tick
  update-system
  move-turtles
end

;; moves the turtles about the world (a visual aid to see their collective behavior)
to move-turtles
  ask turtles [ fd score / avg-score ]
end

;; updates minority, avg-score, and stdev-score globals
to update-system
  let num-picked-zero count turtles with [choice = 0]
  ifelse (num-picked-zero <= (number - 1) / 2)
    [ set minority 0 ]
    [ set minority 1 ]
  ;; plot this here for speed or optimization
  set-current-plot "Number Picking Zero"
  plot num-picked-zero
  set avg-score mean [score] of turtles
  set stdev-score standard-deviation [score] of turtles
end

;; updates turtle's score and their strategies' virtual scores
to update-scores-and-strategy  ;; turtles procedure
  increment-scores
  let max-score max strategies-scores
  let max-strategies []
  let counter 0
  ;; this picks a strategy with the largest virtual score
  foreach strategies-scores [
    if (? = max-score)
      [ set max-strategies lput counter max-strategies ]
    set counter counter + 1
  ]
  set current-strategy one-of max-strategies
  if (choice = minority)
    [ set score score + 1 ]
end

;; this increases the virtual scores of each strategy
;; that selected the minority
to increment-scores  ;; turtles procedure
  ;; here we use MAP to simultaneously walk down both the list
  ;; of strategies, and the list of those strategies' scores.
  ;; ?1 is the current strategy, and ?2 is the current score.
  ;; For each strategy, we check to see if that strategy selected
  ;; the minority. If it did, we increase its score by one,
  ;; otherwise we leave the score alone.
  set strategies-scores (map [ifelse-value (item history ?1 = minority) [?2 + 1] [?2]]
                             strategies strategies-scores)
end

;; updates turtle's choice and re-colors them
to update-choice-and-color  ;; turtles procedure
  set choice (item history (item current-strategy strategies))
  ifelse (color-by = "choice")
    [ recolor-by-choice ]
    [ recolor-by-success ]
end

to recolor-by-choice  ;; turtles procedure
  ifelse (choice = 0)
    [ set color red ]
    [ set color blue ]
end

to recolor-by-success  ;; turtles procedure
  ifelse (score > avg-score + stdev-score)
    [ set color red ]
    [ ifelse (score < avg-score - stdev-score)
        [ set color blue ]
        [ set color green ] ]
end

;; reports the history in binary format (with padding if needed)
to-report full-history
  report sentence n-values (memory - length binary history) [0] (binary history)
end

;; converts a decimal number to a binary number (stored in a list of 0's and 1's)
to-report binary [decimal-num]
  let binary-num []
  loop [
    set binary-num fput (decimal-num mod 2) binary-num
    set decimal-num int (decimal-num / 2)
    if (decimal-num = 0)
      [ report binary-num ]
  ]
end

;; converts a binary number (stored in a list of 0's and 1's) to a decimal number
to-report decimal [binary-num]
  report reduce [(2 * ?1) + ?2] binary-num
end

; Copyright 2004 Uri Wilensky.
; See Info tab for full copyright and license.