DiffusionOfLanguage


Tags

diffusion ("subject area"), linguistics ("subject area"), networks ("genre of modeling"), research ("purpose"), scale-free ("one of the network topologies used is scale-free")

Model was written in NetLogo 4.0.2



VERSION

$Id: DiffusionOfLanguage.nlogo 39017 2008-03-29 22:31:47Z fjs750 $

WHAT IS IT?

This is a linguistics model about how a language change may (or may not) diffuse through a social network. The key research question it investigates is this: how can initially rare grammar variants become dominant in a population, without any global bias in their favor? It is known that such changes can and do occur in the real world - but what conditions are necessary to produce this behavior? This model demonstrates that the behavior can be reproduced by combining simple, cognitively motivated agent rules with a social network structure and certain distributions of heterogeneous bias in the population. The language cascade occurs even though all of the agents' biases sum to 0.

While the model was developed for linguistics, it offers potentially useful lessons about the interactions of heterogeneous agents in a social network, which may apply to other disciplines, such as epidemiology or the diffusion of innovations in marketing.

HOW IT WORKS

In this model, there are two opposing grammar variants (G0 and G1) in the population. Each agent's grammar value lies in the range from 0.0 to 1.0. The value 0.0 means that the agent only speaks grammar variant G0, whereas 1.0 means that the agent only speaks grammar variant G1. For grammar values between 0.0 and 1.0, an agent may speak either G0 or G1, with some probability. The probability is determined by a "production function", whose shape depends on the CATEGORICALNESS parameter, as well as a 'bias' that can vary between agents (this 'bias' may be distributed in various ways, as discussed in more detail later). It is called a "production" function because it maps a tendency toward one grammar or the other into a probability of producing a token of one grammar or the other. If CATEGORICALNESS = 0, the production function is linear, meaning that agents produce G1 tokens with probability given directly by their grammar value, and G0 tokens otherwise.

If CATEGORICALNESS > 0, the production function is nonlinear (in particular, sigmoidal). The agent's bias determines the point at which the production function crosses the line Y = X, which may be considered a repelling point: if the agent's grammar value started below this point and the agent were talking only to itself, it would eventually end up with grammar value 0.0, but if the grammar value started above this point, it would eventually end up at grammar value 1.0. The larger the CATEGORICALNESS parameter, the closer the sigmoidal production function is to a step function; at CATEGORICALNESS = 100, the production function actually becomes a step function. This means that if the agent's grammar value is above a certain point (determined by its bias), it will only speak G1, and if it is below that point, it will only speak G0. In this case, agents are completely categorical about their speech, and are unwilling to mix the two competing grammars.
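
For concreteness, here is the nonlinear case as a standalone sketch; it mirrors the sigmoid-func reporter in the model code below (the reporter name here is just illustrative). Here x is the grammar value, c is CATEGORICALNESS, and r is the offset that places the repelling point at 0.5 + r:

  ; illustrative sketch of sigmoid-func (see the model code below);
  ; assumes 0 < c < 100 -- the model's version also handles the
  ; linear (c = 0) and step-function (c = 100) cases
  to-report production-probability [ x c r ]
    let a (c / (100 - c))                         ; steepness grows as c approaches 100
    let g1-weight x * exp (a * (x - r))
    let g0-weight (1 - x) * exp (a * (1 - x + r))
    report g1-weight / (g1-weight + g0-weight)    ; probability of producing a G1 token
  end

At x = 0.5 + r this reports exactly x, which is why that point is a fixed (repelling) point when an agent listens only to itself.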

Over time, each agent updates its internal grammar value based on the tokens it hears from neighboring agents in the social network. More specifically, in each tick, each agent first produces a single token probabilistically, based on its grammar state and its production function. Each agent then updates its grammar state to be closer to the mean grammar value it heard from all of its neighbors. We use what is sometimes called "alpha learning", whereby the new grammar state is a weighted average of the old grammar state and the mean of all the tokens produced by the neighbors. Thus, high-degree nodes (agents) in the network (which we refer to as "influentials") are "heard" by many more nodes than low-degree nodes. However, the LEARNING-RATE (the rate of change from the current grammar state toward the perceived grammar of the neighbors) is the same for all nodes.
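
Concretely, the alpha-learning update each tick is (this mirrors the learn procedure in the model code below, which additionally clamps the result to [0, 1] and skips nodes with no neighbors):

  ; weight LEARNING-RATE on what was just heard, (1 - LEARNING-RATE) on the old state
  set grammar (learning-rate * mean [ spoken-val ] of link-neighbors) + (1 - learning-rate) * grammar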

As an example, an agent that starts with grammar value 1.0 will certainly produce a G1 token in the first tick of the model. After one tick, it may have heard G0 tokens from its neighbors and adjusted its grammar value downward, meaning that the probability of producing G1 is no longer 100%. However, if the LEARNING-RATE is not too large, the agent's grammar value will probably still be quite high, which corresponds to a high likelihood of producing a G1 token in the next tick. Over time, though, the grammar value may undergo significant changes.

HOW TO USE IT / MODEL PARAMETERS

While the basic mechanics of the model, described above, are simple, there are numerous parameters and ways to initialize the model in order to address different questions.

Here is a brief explanation of each of the model's control parameters, and how they relate to initializing and running the model.

The social network structure (NETWORK-TYPE) may be initialized in several ways:

* "spatial" causes nearby agents (in Euclidean space) to be linked together

* "random" refers to Erdos-Renyi random graphs

* "preferential" refers to the Barabasi-Albert preferential attachment method of creating scale-free networks. The method has been extended slightly to handle the creation of networks with odd average degree, by probabilistically choosing to add either K or K+1 edges as each new node is attached to the network.

* "two-communities" consists of two "preferential" networks loosely connected to each other by some number of links (specified by the INTERCOMMUNITYLINKS parameter).

The network is created with the specified NUMBER-OF-NODES and AVERAGE-NODE-DEGREE.
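
For reference, the "random" option links each pair of nodes independently with a fixed probability, chosen so that the requested average degree holds in expectation (this mirrors setup-random-network in the model code below):

  ; each node has (number-of-nodes - 1) potential partners, so linking every pair
  ; with this probability gives an expected degree of average-node-degree
  let p (average-node-degree / (number-of-nodes - 1))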

By default, nodes start with an internal grammar value of 0.0, meaning they have no chance of ever using variant G1. The NUM-START-WITH-G1 parameter, however, controls the number of nodes in the network that start with grammar value 1.0.

If START-TARGET = "none", the agents that start with grammar value 1.0 are chosen randomly. But if START-TARGET = "influentials", then the 1.0 grammar value is assigned starting with the agent whose influence rank (by node degree) is START-TARGET-RANK and going down in order. For instance, if START-TARGET-RANK = 9 and NUM-START-WITH-G1 = 3, then the 10th, 11th, and 12th most influential agents (highest-degree nodes) will be assigned grammar value 1.0.

Each agent is assigned a bias toward one grammar variant or the other. The bias can range from +0.5 (strongly supporting G1) to -0.5 (strongly supporting G0). If BIAS-DIST = "flat", then all agents are assigned the same bias. If BIAS-DIST = "uniform-symmetric", then the biases are chosen symmetrically in pairs (X and -X) from a uniform distribution between -0.5 and 0.5. If BIAS-DIST = "normal-symmetric", then the biases are chosen symmetrically in pairs (X and -X) from a normal distribution centered around 0, with the log (base 10) of the standard deviation given by the BIAS-STDEV-LOG10 parameter. The distribution is truncated at -0.5 and 0.5 (if a value is out of range, we redraw from the distribution).
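
As a sketch of the symmetric construction (mirroring uniform-symmetric-bias-list in the model code below, and omitting the padding that the full code adds when the number of agents, len, is odd):

  ; draw half the biases, then mirror each one (X and -X) so they sum to zero
  let half-list n-values floor (len / 2) [ -0.5 + random-float 1.0 ]
  let bias-list sentence half-list (map [ 0 - ? ] half-list)

In the normal-symmetric case the draw comes from a truncated normal instead; for example, BIAS-STDEV-LOG10 = -1 corresponds to a standard deviation of 10 ^ -1 = 0.1.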

Additionally, all agents' biases are affected by the GLOBAL-BIAS parameter.

The BIAS-TARGET parameter controls how bias is distributed across the social network. If BIAS-TARGET = "none", then bias is randomly distributed. If BIAS-TARGET = "influentials", then bias is distributed in sorted order (positive bias down to negative) starting with the most influential agent, down to the least influential agent. If BIAS-TARGET = "nearby", then bias is distributed in sorted order outward from one (randomly chosen) of the agents that starts with the G1 grammar. This last method has the effect of creating a very favorable initial audience for the G1 speakers, and (in our experiments) appears to greatly improve the chances of a language cascade.

The preceding discussion is most relevant for the "spatial", "random", and "preferential" network types. The grammar states and biases for the "two-communities" network type are initialized according to different rules. In this case, two "preferential" network communities are created - one initially consisting entirely of G0 speakers and the other entirely of G1 speakers. The COMA-START and COMB-START parameters control whether each community's bias is distributed in such a way that the community is more ripe for a language cascade to occur, or more resistant to change from the status quo. More specifically, in each community, the biases are distributed outward from a random node in sorted order (either ascending or descending, depending on the setting). In Community A, if the bias is distributed outward starting with positive bias (supporting G1) down to negative bias, then the network will be more "ripe" for a G1 cascade. On the other hand, distributing bias outward from negative bias (supporting G0) up to positive bias creates a configuration that is more resistant to change. For Community B (which starts with G1 prevalent) the situation is reversed, but otherwise exactly the same.

The links between these two communities are chosen based on the COMA-BRIDGE-BIAS and COMB-BRIDGE-BIAS parameters. If COMA-BRIDGE-BIAS = 0, then the agents in Community A that are most biased towards G0 will be chosen as "bridge" nodes - meaning they will be linked to the other community. If COMA-BRIDGE-BIAS = 1, then the agents most biased towards G1 will be bridge nodes. Similarly, COMB-BRIDGE-BIAS determines which nodes will be bridge nodes in Community B.

As mentioned above, the CATEGORICALNESS parameter affects the degree to which nodes are willing to speak the two grammar variants interchangeably, rather than preferring to speak one of them consistently (categorically, or semi-categorically).

The LEARNING-RATE parameter controls the rate at which agents' speech affects other agents' internal grammar values. Each agent's grammar value is updated as a weighted sum of the old grammar value and the mean grammar heard from its neighbors. The LEARNING-RATE is the weight given to the new data, while (1 - LEARNING-RATE) is the weight given to the old grammar value.

The PROBABILISTIC-SPEECH? parameter controls whether agents speak discrete 0-or-1 tokens probabilistically (ON), or instead speak the real-valued numbers between 0 and 1 produced by their production functions (OFF). The default is for PROBABILISTIC-SPEECH? to be ON. Turning it OFF could correspond to a long iterated batch-learning process. In many ways, turning it OFF removes some noise from the system and causes faster convergence to an equilibrium. However, the noise *can* be crucial in certain situations, and the behavior will be different. There may be some interesting avenues for further research here...

The VISUALS? parameter turns the visual display on or off. Turning VISUALS? OFF can help speed up runs when running experiments. It will not affect the outcome of the model at all.

The COLOR-BY-BIAS and COLOR-BY-GRAMMAR buttons affect the visualization of the network, scaling low values (0.0 grammar, or -0.5 bias) to black, and high values (1.0 grammar, or +0.5 bias) to white.

The LAYOUT button can be used to try to improve the visual aesthetics of the network layout. Note that this only affects visualization, and does not affect the model itself.

The SETUP button initializes the model, and the GO button runs the simulation until it has converged (or forever, if it does not converge). The STEP button goes one tick at a time.

Various monitors show statistics (min, max, mean) for the grammar values and grammar biases. The "GRAMMAR STATE" plot shows the mean internal grammar value of the agents over time, as well as the mean spoken value.

THINGS TO NOTICE

THINGS TO TRY

EXTENDING THE MODEL

RELATED MODELS

Language Change (by Celina Troutman)

CREDITS AND REFERENCES

Written by Forrest Stonedahl, in collaboration with Janet Pierrehumbert and Robert Daland.

Comments and Questions

What do you think? (Question)

This model is work I've been doing with Janet Pierrehumbert on modeling linguistics. Is it too confusing currently, after having read the infotab?

Posted over 15 years ago


breed [ nodes node ]
nodes-own
[ 
  grammar
  grammar-bias
  spoken-val
]
globals [ 
  seed 
  initial-fraction-influenced-by-minority
  ]

to setup [ rseed ]
  ca
  set seed rseed
  random-seed seed
  with-local-randomness [  ask patches [set pcolor cyan - 3 ]  ]
  setup-nodes
  ifelse network-type = "two-communities"
  [
    ; set up community 1
    let nodesetA (nodes with [ who < (number-of-nodes / 2) ])
    setup-preferential-network nodesetA average-node-degree
    ; start all nodes with grammar 0
    ask nodesetA [ set grammar 0.0 ]
    ; If comA-start = "resistant", we start by having some node strongly
    ; biased towards grammar 0, and distribute negative bias outward from that node.
    ;  - This is as if there was already a 0 cascade that succeeded.
    ; If comA-start = "ripe", we start by having some node strongly biased
    ; toward grammar 1, and distribute positive bias outward from that node.
    ;  - In this case the community is "ripe" for a 1s cascade.
    setup-biases nodesetA (one-of nodesetA) (comA-start = "resistant")
    ask nodesetA [  set xcor (xcor / 2) - (world-width / 4)  ]    
    let nodesetB (nodes with [ who >= (number-of-nodes / 2) ])
    setup-preferential-network nodesetB average-node-degree
    ; start all nodes with grammar 1
    ask nodesetB [ set grammar 1.0 ]
    ; see note above about comA-start: comB start is similar.
    setup-biases nodesetB (one-of nodesetB) (comB-start != "resistant")
    ask nodesetB [ set xcor (xcor / 2) + (world-width / 4)  ]
    
    ; sort from low bias (against grammar 1) to high bias (in favor of grammar 1)
    let nodelistA sort-by [ [grammar-bias] of ?1 < [grammar-bias] of ?2 ] nodesetA
    let nodelistB sort-by [ [grammar-bias] of ?1 < [grammar-bias] of ?2 ] nodesetB
    if (comA-bridge-bias = 1)
    [ set nodelistA reverse nodelistA ]
    if (comB-bridge-bias = 1)
    [ set nodelistB reverse nodelistB ]
    
    repeat intercommunitylinks
    [
      ask (first nodelistA) [ create-link-with (first nodelistB) ]   
      set nodelistA but-first nodelistA
      set nodelistB but-first nodelistB
    ]
    
  ]
  [  
    if (network-type = "random")
    [ setup-random-network ]
    if (network-type = "spatial")
    [  setup-spatially-clustered-network ]
    if (network-type = "preferential")
    [  setup-preferential-network nodes average-node-degree ]
    
    ifelse (start-target = "influentials")
    [
      let sortednodes sort-by [[count link-neighbors] of ?1 > [count link-neighbors] of ?2 ] nodes
      repeat start-target-rank
       [ set sortednodes but-first sortednodes ]
      ask (turtle-set sublist sortednodes 0 num-start-with-G1) [ set grammar 1 ]
    ][
      ask n-of num-start-with-G1 nodes 
      [
        set grammar 1.0
      ]
    ]
    ; if there is more than one node from which the new grammar might spread, we pick one randomly
    let start-node max-one-of nodes [ grammar ]
    setup-biases nodes start-node false
  ]  
  with-local-randomness [
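    ; fraction of all link endpoints that belong to the initial G1 speakers --
    ; a measure of how large an audience the minority grammar starts with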
    set initial-fraction-influenced-by-minority sum [ count link-neighbors ] of nodes with [ grammar > 0.5 ] / (2 * count links )
    if visuals?
    [
      ask nodes 
      [ 
        color-by-grammar 
        size-by-degree
      ]
    ]
  ]
end 

to-report uniform-symmetric-bias-list [ len ]
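  ; draw half the biases uniformly from [-0.5, 0.5), then mirror them (X and -X)
  ; so they sum to zero; if len is odd, pad with a single 0 bias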
  let bias-list n-values floor (len / 2) [ -0.5 + random-float 1.0 ]
  set bias-list sentence bias-list (map [ 0 - ? ] bias-list )
  if (length bias-list != len)
    [ set bias-list fput 0 bias-list ]
  report bias-list
end 

to-report random-normal-cutoff [ avg stdev xmin xmax ]
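  ; sample from a normal distribution, redrawing until the value falls
  ; within [xmin, xmax] (i.e. a truncated normal)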
  let x random-normal avg stdev
  while [ x < xmin or x > xmax ] 
  [ set x random-normal avg stdev ]
  report x
end 

to-report normal-symmetric-bias-list [ len ]
  let stdev 10 ^ bias-stdev-log10
  let bias-list n-values floor (len / 2) [ random-normal-cutoff 0 stdev -0.5 0.5 ]
  set bias-list sentence bias-list (map [ 0 - ? ] bias-list )
  if (length bias-list != len)
    [ set bias-list fput 0 bias-list ]
  report bias-list
end 

to setup-nodes
  set-default-shape nodes "circle"
    
  create-nodes number-of-nodes
  [
    ; for visual reasons, we don't put any nodes *too* close to the edges
    setxy random-xcor * .95 random-ycor * .95
    set grammar 0.0
  ]
end 

to setup-biases [ thenodes start-node reverse-order? ]
  let bias-list false ; this will cause an error if bias-dist wasn't a valid choice.
  if (bias-dist = "flat")
  [ set bias-list n-values (count thenodes) [ global-bias ]  ]
  if (bias-dist = "uniform-symmetric")
  [ set bias-list uniform-symmetric-bias-list (count thenodes) ]
  if (bias-dist = "normal-symmetric")
  [ set bias-list normal-symmetric-bias-list (count thenodes) ]
  let nodelist [self] of thenodes
  if (bias-target = "influentials")
  [
    set bias-list sort bias-list
    set nodelist sort-by [[count link-neighbors] of ?1 < [count link-neighbors] of ?2 ] thenodes
  ]
  if (bias-target = "nearby")
  [
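    ; nodes farthest (in network distance) from start-node come first and receive
    ; the most negative biases, so the most G1-favorable biases end up clustered
    ; around start-node (unless reverse-order? later flips the list)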
    set bias-list sort bias-list
    set nodelist sort-by [[__network-distance start-node links] of ?1 > [__network-distance start-node links] of ?2 ] thenodes
  ]
  if (reverse-order?) 
    [ set bias-list reverse bias-list ]
  foreach nodelist
  [
    ask ?
    [
      set grammar-bias first bias-list
      set bias-list but-first bias-list
    ]
  ]
end 

to setup-random-network
  ask nodes [ 
    ask nodes with [who > [who] of myself ]
    [
      if (random-float 1.0 < (average-node-degree / (number-of-nodes - 1) ))
      [ create-link-with myself ]
    ]
  ]
  if visuals?
  [
     repeat 40 [ do-network-layout nodes ]  
     rescale-network-to-world
  ]
end 

to setup-spatially-clustered-network
  let num-links (average-node-degree * number-of-nodes) / 2
  while [count links < num-links ]
  [
    ask one-of nodes
    [
      let choice (min-one-of ((other turtles) with [ not link-neighbor? myself ]) [ distance myself ])
      if (choice != nobody)
        [ create-link-with choice ]
    ]
  ]
  ; make the network look a little prettier
  if visuals?
  [
     repeat 10 [ do-network-layout nodes ]  
     rescale-network-to-world
  ]
end 

to setup-preferential-network [ thenodes avg-node-deg ]
  link-preferentially thenodes (avg-node-deg / 2)
  
  ; make the network look a little prettier
  if visuals?
  [
     with-local-randomness [
       layout-radial thenodes links (max-one-of thenodes [ count link-neighbors ] )
     ]
     repeat 10 [ do-network-layout thenodes ]  
     rescale-network-to-world
  ]
end 
; The parameter k is the number of edges to add at each step (e.g. k=1 builds a tree)
;  (if k has a fractional part, then we probabilistically add either floor(k) or floor(k) + 1 edges)
;  k MUST be 2 or greater, otherwise there are errors!

to link-preferentially [ nodeset k ]
  let floork (floor k)
  let fractionk (k - floork)
  let nodelist sort nodeset
  let neighborchoicelist sublist nodelist 0 floork
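  ; neighborchoicelist serves as a degree-weighted urn: as links are added, each
  ; node is inserted once per link endpoint it gains, so drawing "one-of" it picks
  ; a neighbor with probability roughly proportional to its current degree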
  
  ask item floork nodelist
  [ 
    create-links-with turtle-set neighborchoicelist 
    set neighborchoicelist sentence (n-values floork [ self ] ) neighborchoicelist
  ]
  
  foreach sublist nodelist (floork + 1) (length nodelist)
  [
    ask ?
    [
      let tempneighborlist neighborchoicelist
      let mydegree floork + ifelse-value ((who > floork + 1) and (random-float 1.0 < fractionk)) [ 1 ] [ 0 ]
      repeat mydegree
      [
        let neighbor one-of tempneighborlist
        set tempneighborlist remove neighbor tempneighborlist 
        set neighborchoicelist fput neighbor neighborchoicelist
        create-link-with neighbor
      ]
      set neighborchoicelist sentence (n-values mydegree [ self ] ) neighborchoicelist
    ]
  ]
end 

to do-network-layout [ thenodes ]
   with-local-randomness [
     layout-spring thenodes links 0.3 0.8 * (world-width / (sqrt number-of-nodes)) 0.5
   ]
end 

to rescale-network-to-world
    with-local-randomness [
      let minx (min [ xcor ] of nodes)
      let miny (min [ ycor ] of nodes)
      let cw (max [ xcor ] of nodes) - minx
      let ch (max [ ycor ] of nodes) - miny
      ask nodes [ 
        set xcor (xcor - minx) / cw * (world-width - 1) + min-pxcor
        set ycor (ycor - miny) / ch * (world-height - 1) + min-pycor
      ]
    ]
end 

to go
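  ; two-phase scheduling: every node produces one token, then every node updates
  ; its grammar based on the tokens its neighbors just produced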
  ask nodes [ speak ]
  ask nodes [ learn ]
;; this would be a different type of scheduling, where high degree nodes
;; are 'learning' much more quickly than the rest of the agents.
;; if we delete this stuff, also delete "learn-from" procedure down below!
;  ask links [
;    ask both-ends [
;      speak
;      learn-from other-end
;    ]
;  ]
  
  if visuals?
  [
    with-local-randomness [
      ask nodes [ color-by-grammar ]
    ]
    update-plot
  ]
  tick
end 

to size-by-degree
  set size 0.3 * sqrt (count link-neighbors + 1)
end 

to color-by-grammar
  set color scale-color yellow grammar 0 1.000000001
end 

to color-by-grammar-bias
  set color scale-color red grammar-bias -0.50000001 .500000001
end 

to-report sigmoid-func [ x nonlinearity repel-offset ]
  ; this is a sigmoid-type function [0,1] --> [0,1] with parameters:
  ;    x: input
  ;    nonlinearity: degree of nonlinearity (0 = linear, 100 = step function)
  ;    repel-offset: determines (repelling) fixed point: x' = 0.5 + repel-offset
  if nonlinearity = 100 [report ifelse-value (x < (0.5 + repel-offset)) [0.0] [1.0]]
  if nonlinearity = 0 [report x] ; linear!
  if (repel-offset < -0.5) [ set repel-offset -0.5 ]
  if (repel-offset > 0.5) [ set repel-offset 0.5 ]
  let a (nonlinearity / (100.0 - nonlinearity))
  let left-term (x * exp(a * (x - repel-offset)))
  let right-term ((1.0 - x) * exp(a * (1.0 - x + repel-offset)))
  report (left-term / (left-term + right-term))
end 

to speak
  let prob (sigmoid-func grammar categoricalness (0.0 - (global-bias + grammar-bias)))
  ifelse (probabilistic-speech?)
  [   set spoken-val ifelse-value (random-float 1.0 < prob) [ 1 ] [ 0 ]   ]
  [   set spoken-val prob   ]
end 

to learn
  if (not any? link-neighbors)
    [ stop ]
  let new-gram (learning-rate * mean [ spoken-val ] of link-neighbors) + (1 - learning-rate) * grammar 
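  ; clamp the result to [0, 1] as a safeguard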
  ifelse (new-gram > 1) 
    [ set new-gram 1 ]
    [ if (new-gram < 0) [ set new-gram 0 ] ]
  set grammar new-gram
end 
;; This procedure would be useful, if we decided to use the different update scheduling mentioned in
;; the GO procedure, wherein high degree nodes do a lot more speaking *AND* learning than other nodes.
;to learn-from [ othernode ]
;  let new-gram (learning-rate * [ spoken-val ] of othernode) + (1 - learning-rate) * grammar 
;  ifelse (new-gram > 1) 
;    [ set new-gram 1 ]
;    [ if (new-gram < 0) [ set new-gram 0 ] ]
;  set grammar new-gram
;end

to update-plot
  with-local-randomness [
    set-current-plot "Grammar State"
    set-current-plot-pen "state"
    plot mean [ grammar ] of nodes
    set-current-plot-pen "spoken"
    plot mean [ spoken-val ] of nodes
  ]
end 

to-report converged?
  ; if the chance of the out-lier node producing a minority-grammar
  ;    token in the next 10,000 time steps is safely less than 0.01%, then stop.
    if not any? nodes [ report false ]
    report ((min [ grammar ] of nodes) > (1 - 1E-8) or (max [ grammar ] of nodes) < 1E-8)
end 
;; The following several procedures are not necessary for the running of the model, but may be
;; useful for measuring the model, BehaviorSpace experiments, etc.

to-report cascaded?
  ifelse (converged? and mean [grammar] of nodes > 0.5) 
    [ report 1 ] 
    [ report 0 ]
end 

to-report cascaded90?
  ifelse (mean [grammar] of nodes > 0.9)
  [ report 1 ] 
  [ report 0 ]
end 

to-report communityA
  report nodes with [ who < (count nodes / 2) ]
end 

to-report communityB
  report nodes with [ who >= (count nodes / 2) ]
end 
;; The following procedures are not at all crucial to the model
;; I just used them to be able to repeat some interesting setups,
;; do some quick testing, etc.  
;; They should probably all be deleted at some point. ~Forrest 7/22/2008

to demo-setup
  clear-all
  set number-of-nodes 100
  set average-node-degree 3
  set bias-dist "normal-symmetric"
  set bias-stdev-log10 1
  set probabilistic-speech? true
  set network-type "preferential"
  set categoricalness 50
  set global-bias 0
  set num-start-with-G1 25
  set learning-rate 0.05
  set visuals? true
  setup 367808704
end 

to demo-setup2
  ; if we use robert's type of node interactions in each time step:
  ; seed that cascades, but only for high learning rate
  setup 762758417
  set learning-rate 0.6
end 
;to setup-one [theseed]
;clear-all
;setup theseed
;ask nodes [ set grammar 0 ]
;;ask max-n-of (new-grammar-fraction * number-of-nodes) nodes  [ count link-neighbors ] [ set grammar 1 ]
;ask max-n-of 1 nodes  [ count link-neighbors ] [ set grammar 1 ]
;;print mean [ count link-neighbors ] of nodes with [ grammar > 0.5 ]
;ask nodes [ color-by-grammar ]
;end

to demo-setup-one
  set learning-rate 0.13
  set visuals? true
  set average-node-degree 3
  set probabilistic-speech? true
  set categoricalness 50
  set bias-target "influentials"
  set bias-dist "uniform-symmetric"
  set number-of-nodes 200
  set network-type "preferential"
  set bias-stdev-log10 1
  set global-bias 0
  set num-start-with-G1 2
  set start-target "influentials"
  set start-target-rank 0
  setup 1324523410
  ; for N=300, out of 1000 runs, these 3 seeds cascaded upwards:
  ; SEED: 6410918, 1256638123, 685548220
end 

to-report quicktest
  setup new-seed 
  repeat 1000 [ 
    go 
    if converged? [ report mean [grammar] of nodes]
  ]
  report mean [ grammar ] of nodes
end 

to demo-surprising-case
  set bias-stdev-log10 1
  set number-of-nodes 300
  set start-target "none"
  set network-type "preferential"
  set learning-rate 0.05
  set bias-target "nearby"
  set categoricalness 50
  set global-bias 0
  set bias-dist "normal-symmetric"
  set visuals? true
  set average-node-degree 4
  set num-start-with-G1 3
  set probabilistic-speech? true
  set start-target-rank 0
  setup 3543911878112519 
end 
