ConversationWorld

Nathan Couch (Author)
Model was written in NetLogo 6.0-M6.



WHAT IS IT?

This NetLogo model implements the model of convention formation specified in Couch (2016). In this model, agents wander the world, examine the world state, and then attempt to communicate that world state to each other. Each agent encodes meanings as messages via an associative mapping that is updated in response to the success or failure of each attempted communication.

HOW IT WORKS

The model is extremely simple. Agents each possess a mapping from a set of meanings to a set of messages. The agents wander the world and attempt to communicate with each other about the state of the world. They then update their mappings to better approximate the mappings of other agents in the world.
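In language-agnostic terms, the loop looks something like the following Python sketch (an illustration only: the real model uses the neural-network mappings described under SETUP, not a lookup table, and the `Agent`/`converse` names are invented for this sketch):

```python
import random

MEANINGS = ["o0", "o1", "o2"]   # toy meaning language
MESSAGES = ["w0", "w1", "w2"]   # toy message language

class Agent:
    def __init__(self):
        # each agent starts with a random meaning -> message mapping
        self.mapping = {m: random.choice(MESSAGES) for m in MEANINGS}

    def encode(self, meaning):
        return self.mapping[meaning]

    def decode(self, message):
        # invert the mapping; if several meanings share the message, guess
        candidates = [m for m, w in self.mapping.items() if w == message]
        return random.choice(candidates) if candidates else random.choice(MEANINGS)

def converse(speaker, hearer, meaning):
    """One communication attempt; on failure, nudge the hearer toward the speaker."""
    message = speaker.encode(meaning)
    success = hearer.decode(message) == meaning
    if not success:
        hearer.mapping[meaning] = message
    return success
```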

SETUP

The principal parameters varied in the model are the size and structure of the meaning and message languages.

The model opens a number of child models. Each child model consists of a pair of artificial neural networks: one maps sentences of the meaning language to sentences of the message language, and one performs the reverse mapping. The size and structure of the networks are set by a number of parameters in the parent model. Relevant parameters are:

  1. The number of nodes in the input layer, which is either the number of objects and relations in the language (for the meaning-to-message network) or the number of words in the message language (for the message-to-meaning network).
  2. The number and size of the middle layers.
  3. The number of nodes in the output layer, which is set in the same way as the input layer's (with the roles of the two languages reversed).

In addition, each patch is given a meaning from the set of possible meanings in the language. See Couch (2016) for details.
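The layer sizing described above can be summarized as a short sketch (illustrative Python; the parameter names are stand-ins for the model's sliders, not their actual widget names):

```python
def layer_sizes(num_objects, num_relations, num_words,
                middle_layer_size, num_middle_layers,
                direction="meaning-to-message"):
    # one input/output node per object, relation, or word
    meaning_size = num_objects + num_relations
    message_size = num_words
    if direction == "meaning-to-message":
        first, last = meaning_size, message_size
    else:  # the reverse, message-to-meaning network
        first, last = message_size, meaning_size
    return [first] + [middle_layer_size] * num_middle_layers + [last]
```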

GO

At each time step, the agents move randomly around the world and examine the meaning on the patch they are standing on. Each agent then maps that meaning to a message and picks another agent to send the message to. The receiving agent decodes the message and indicates whether the meaning it derived matches the one the original agent started with. The result of this conversation is then stored in the agent's memory, the size of which is controlled by the user.

After this, each agent then iterates over the contents of its memory and uses those memories to update its mappings. The details of the mapping and learning process are in ConversationWorldBrain.nlogo.
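The bounded memory behaves like this sketch (illustrative Python; `remember` is a hypothetical helper, the NetLogo code does the same thing inline with lput and but-first):

```python
def remember(memory, interaction, capacity):
    """Append the latest (meaning, message) pair to memory;
    if memory exceeds its capacity, forget the oldest entry."""
    memory = memory + [interaction]
    if len(memory) > capacity:
        memory = memory[1:]
    return memory
```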

HOW TO USE IT

Fairly simple: pick the parameter settings that you wish to use, then hit setup and go.

Some things to note: higher numbers of agents, larger languages, and larger or more hidden layers all make the model run very, very slowly. These effects are multiplicative: the slowdown from creating more agents with larger brains is (slightly!) larger than the slowdown from increasing either individually.

Additionally, the model appears to run best when there are a moderate number (2-3) of hidden layers, and when each layer is approximately as large as the others. That is, the model performs poorly when the sizes of the meaning and message languages are far apart, or when the size of the hidden layers is very different from the size of the input and output layers.

THINGS TO TRY

There are just gobs of parameters to set here, some of which I haven't documented. So I guess one thing to try would be to figure them out for yourself!

More seriously, it may be interesting to structure the agents' experience of the world and see if that makes it easier or harder for them to converge on a common language. For instance, one idea that I had but did not implement was to first only write meanings consisting of single objects to the world, have the agents learn a mapping for that, and then slowly increase the complexity of the meanings by adding in conjunctive and relational meanings.

Another idea would be to change the code that dictates the way that meanings and messages are mapped onto each other. Perhaps agents would perform better if their mappings were accomplished through some kind of Bayesian process, or if the mapping was non-associative. I have tried to make the code that controls how agents map meanings to messages (and vice versa) and update their mappings as encapsulated as possible, though some mapping regimes might require more changes than others.
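For instance, a simple frequency-count mapper, a crude stand-in for a Bayesian learner, might look like the following (an illustrative Python sketch, not part of the model; `CountMapper` is an invented name):

```python
from collections import defaultdict

class CountMapper:
    """Tallies (meaning, message) co-occurrences from remembered conversations
    and maps each meaning to its most frequently co-occurring message."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, meaning, message):
        self.counts[meaning][message] += 1

    def encode(self, meaning, fallback):
        # return the best-attested message, or a fallback for unseen meanings
        table = self.counts[meaning]
        return max(table, key=table.get) if table else fallback
```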

Or, you could try to implement some kind of instruction system, where one agent attempts to teach a mapping to all the others, who then go off and communicate with each other. I haven't thought much about the different ways in which you could structure the agents' attempts to talk to each other and teach each other mappings, but I bet there's just gobs of ways that it could go.

NETLOGO FEATURES

This model uses LevelSpace to give each agent a brain that performs the mappings.

RELATED MODELS

A LOT of this code is ripped from the Artificial Neural Net - Multilayer model found in the NetLogo Models Library and the Sheeps with Brains model found on the Modeling Commons.

CREDITS AND REFERENCES

Couch (2016) Untitled (in prep)

Barr, D. (2004). Establishing conventional communication systems: Is common knowledge necessary? Cognitive Science, 28(6), 937–962. http://doi.org/10.1016/j.cogsci.2004.07.002



extensions [ls]

globals [
 meaning-language
 message-language
 objects
 relations

 all-meanings

 meaning-inputs
 message-inputs

 attempt-list
 accuracy-list
]

patches-own [ patch-meaning ]
turtles-own [ brain memory last-meaning last-message success?]

to setup

  clear-all
  ls:reset

  set attempt-list []
  set accuracy-list []

  ;; get the meaning and message languages set up
  generate-languages

  ;; give each patch a meaning
  ask patches [
    ;; there is a chance that the patch contains a relation, otherwise
    ;; there is an equal chance that it contains one or two objects
    ifelse (random 100) < relation-prob and num-relations != 0
      [ set patch-meaning generate-meaning 2 1 ]
      [ ifelse num-objects > 1
        [ set patch-meaning generate-meaning (one-of (list 1 2)) 0 ]
        [ set patch-meaning generate-meaning 1 0 ] ]

    ;; color the patch to weakly indicate what meaning it contains
    color-patch
  ]

  crt num-turtles [
    set color red
    setxy random-xcor random-ycor
    setup-brain
  ]

  reset-ticks
end 

to setup-test

  clear-all
  ls:reset

  set attempt-list []
  set accuracy-list []

  ;; get the meaning and message languages set up
  generate-languages

  ;; give each patch a meaning
  ask patches [

    ;; there is a chance that the patch contains a relation, otherwise
    ;; there is an equal chance that it contains one or two objects
    ifelse (random 100) > relation-prob and num-relations != 0
      [ set patch-meaning generate-meaning 2 1 ]
      [ ifelse num-objects > 1
        [ set patch-meaning generate-meaning (one-of (list 1 2)) 0 ]
        [ set patch-meaning generate-meaning 1 0 ] ]
  ]

  crt num-turtles [
    set color red
    setup-brain
    face patch-at min-pxcor min-pycor
  ]

  reset-ticks
end 

to go

  set attempt-list []


  ask turtles [
    move
    talk
    update
  ]

  ;; append to the end of the list the mean accuracy of the model at this tick
  set accuracy-list lput mean attempt-list accuracy-list
  tick
end 

;; TURTLE PROCEDURES

to setup-brain

 ;; sets up the turtle's brain

 set memory []
 (ls:load-headless-model "ConversationWorldBrain.nlogo" [ set brain ? ])
 ls:set-name brain (word "Brain of " self)
 (ls:ask brain [setup-brain ?1 ?2 ?3 ?4 ?5 ?6 ?7] meaning-language middle-layer num-middle-layers message-language meaning-to-message-iterations 10 learning-rate)
end 

to-report map-meaning [p-meaning]

  ls:let inputs1 map [member? ? p-meaning] meaning-language
  report ls:report brain [ apply-bools1 inputs1 ]
end 

to-report map-message [a-message]

  ls:let inputs2 map [member? ? a-message] message-language
  report ls:report brain [ apply-bools2 inputs2 ]
end 

to update

     ;; send the contents of memory to child model to improve the mapping
     (ls:ask [brain] of self [ update-mappings ?] memory )
end 

to move

  ;; turn and move a random amount
  rt (random 60) - 30
  forward 1
end 

to talk
  ;; pick someone to talk to
  let interlocutor one-of other turtles
  if (interlocutor != nobody) [

    ;; map the meaning to a message
    let message ( map-meaning [patch-meaning] of patch-here )

    ;; give the message to the other turtle and have them interpret it
    let received-meaning [map-message message] of interlocutor

    ;; check if there is a meaning mismatch
    ifelse received-meaning != map-meaning [patch-meaning] of patch-here
      [ set success? false ]
      [ set success? true  ]

    ;; encode the present interaction in memory
    set last-meaning map [member? ? [patch-meaning] of patch-here ] meaning-language
    set last-message [map-meaning map [member? ? [patch-meaning] of patch-here ] meaning-language ] of interlocutor
    set memory lput (list last-meaning last-message) memory

    ;; if the contents of memory exceed its capacity, forget the oldest thing in memory
    if length memory > memory-capacity
      [ set memory but-first memory]

    ;; record whether the interaction was successful
    ifelse (success?) [set attempt-list lput 1 attempt-list ][set attempt-list lput 0 attempt-list ]
  ]
end 

;; PATCH PROCEDURES

to-report generate-meaning [ num-ob num-rel ]

  ;; helper function to generate messages
  let p-meaning ( list (n-of num-ob objects) (n-of num-rel relations))
  report flatten-list p-meaning
end 


;; GENERAL PROCEDURES

to generate-languages

  ;; sets up the language and meaning models
  generate-meaning-language
  generate-message-language
  set all-meanings generate-all-meanings
end 

to generate-meaning-language

  ;; generates the sets of objects and relations, then combines them into a single list.
  set relations map [ word "r" ? ] (n-values num-relations [?])
  set objects   map [ word "o" ? ] (n-values num-objects   [?])
  set meaning-language flatten-list (list objects relations)
end 

to generate-message-language

  ;; generates a list of words in the language
  set message-language map [ word "w" ? ] (n-values num-words [?])
end 

to-report generate-message

  ;; convenience function to generate messages for the simple
  ;; brain used in the non-levelspace version of this model
  let word1 one-of message-language
  let word2 one-of message-language
  let word3 one-of message-language
  report (list word1 word2 word3)
end 

to-report generate-all-meanings

  ;; generates all possible meanings with the present language
  let two_meanings cartesian-product objects objects
  let three_meanings cartesian-product two_meanings relations
  report reduce sentence ( list objects two_meanings three_meanings )
end 

to-report cartesian-product [ list1 list2 ]

  ;; produces a list of all possible pairs where the first element of each pair
  ;; comes from the first list and the second element comes from the second list
  report reduce sentence map [ cartesian-helper ? list2 ] list1
end 

to-report cartesian-helper [ element list2 ]

  ;; required by cartesian-product to work well. This could also be used to generate
  ;; a version of cartesian-product that takes an arbitrary number of lists, but since
  ;; I only need it to work on two lists I'm sparing the effort.
  report map [ ( list element ?) ] list2
end 

to-report flatten-list [ lst ]
   ; flattens nested lists to a single list.
   if (reduce [?1 or is-list? ?2] fput false lst) [
     set lst reduce [sentence ?1 ?2] lst
     set lst flatten-list lst
   ]
   report lst
end 


;; Aesthetic functions

to color-patch

  ;; colors the patches so that the color and hue of the patch kiiinda indicates the meaning on it
  let vec map [member? ? patch-meaning ] meaning-language
  set vec map [ifelse-value ? [1][0]] vec
  set vec sum map [(5 * ? * (item ? vec)) + (50 * ?) ] n-values length vec [?]
  set pcolor vec
end 

;; REPORTERS

to-report windowed-accuracy [window-n]

  ;; indexes the last WINDOW members of accuracy-list and takes the mean
  ifelse window-n > length accuracy-list
    [ report 0 ]
    [ report mean map [item ? reverse accuracy-list] n-values window-n [?] ]
end 

to-report mean-attempt-list

  ifelse length attempt-list > 0
    [ report mean attempt-list ]
    [ report 0 ]
end 

There is only one version of this model, created over 9 years ago by Nathan Couch.

Attached files

AMB-FinalPaper.docx (Word document) — the Couch (2016) referred to in the documentation (uploaded over 9 years ago by Nathan Couch)
ConversationWorldBrain.nlogo (extension) — the code for the child models for this model (uploaded over 9 years ago by Nathan Couch)
