Volunteers motivation
WHAT IS IT?
This model simulates the effect of volunteers’ motivation on their work. Its aim is to demonstrate that volunteers will not work forever, and that various motivational measures (e.g. setting small tasking radii or making sure the volunteers are motivated on various occasions) can help to get the work done while keeping the volunteers satisfied. The model can be run in two modes:
- The “Run1” button executes one “event” and stops once all issues have been resolved.
- The “Run” button starts the next event as soon as all issues of the current one have been resolved.
“Run1” can be used to investigate the tasking process and motivation changes within a single event, whereas “Run” shows how the volunteers’ motivation changes over longer periods.
WARNING
The model is very simple and by no means calibrated. Some of the intended functionality has not been implemented (yet). The parts that are implemented have been tested and work as designed. The remaining functionality is marked with "TODO" in this documentation.
HOW IT WORKS
Volunteers walk around and report whenever they see an anomaly. Depending on the value of "Nr-confirmations", one or more reports are required before the anomaly is considered "resolved".
- Nearby volunteers will be directed towards already discovered anomalies
- If "Init-error" > 0, some of the reports will be false.
- If "Nr-confirmations" > 1, an anomaly finding has to be confirmed (reported more than once, by different volunteers).
MOTIVATION
Volunteers’ motivation is crucial for their behavior.
All volunteers start with init-motivation. The motivation will grow whenever something "nice" happens and while the volunteer is resting. Likewise, the motivation will fall while the volunteer is moving as well as in the case something "bad" happens.
- Volunteers whose motivation reaches Init-motivation will start moving spontaneously and look for issues to report. They will lose M-moving-loss motivation points on each tick and stop searching as soon as their motivation falls to Treshold-selfmotivation.
- Volunteers can be tasked to perform a job if their motivation is higher than or equal to Treshold-tasking_motivation. Once they accept a task, they will always finish it, even if their motivation falls below zero while doing so.
- The only task type implemented in this model is "confirm observation" which sends the volunteer towards some (nearby) discovered but unresolved issue. Maximal tasking distance is determined by R-task global variable. Automatic R-task adjustment has been disabled in this model.
- Volunteers will receive a motivation boost of M-task whenever they are tasked, M-discover whenever they report an observation and M-consensus whenever their observation is confirmed.
- However, they will also receive a negative M-consensus whenever their observation differs from the consensus. That is, a volunteer contributing a correct observation will lose motivation if the consensus is incorrect, and a volunteer contributing an incorrect observation will lose motivation if the consensus is correct. (TODO: this is not implemented yet!)
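The continuous gains and losses described above are handled by a single dispatcher procedure in the Code tab; a slightly simplified version of the model's own alter-v_motivation looks like this:

```netlogo
; simplified from the model's alter-v_motivation procedure:
; per-tick gains/losses plus single-shot boosts such as "new-task"
to alter-v_motivation [reason]
  if reason = "resting"  [ set v_motivation v_motivation + M-idle-gain   stop ]
  if reason = "moving"   [ set v_motivation v_motivation - M-moving-loss stop ]
  if reason = "tasked"   [ set v_motivation v_motivation - M-moving-loss stop ]
  if reason = "new-task" [ set v_motivation v_motivation + M-task        stop ]
end
```

Note that "tasked" movement currently costs the same M-moving-loss as free searching; a different rate for tasked volunteers is marked as a TODO in the code.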
LEARNING BY DOING (TODO learning not implemented yet!)
All volunteers make mistakes. Initially, the probability of making a mistake is Init-error, but they can learn by reporting and waiting for the consensus result. If the consensus is positive, they will experience a positive learning effect and their error probability will fall by 1%. If the consensus is negative, they will experience a negative learning effect and their error probability will rise by 1%.
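Since learning is not implemented yet, the rule above exists only as a description. A minimal, purely hypothetical sketch of what it could look like (v_error is already declared in volunteers-own and holds the error probability in percent; the procedure name and its caller are assumptions, not part of the model):

```netlogo
; hypothetical sketch of the not-yet-implemented learning rule;
; consensus-positive? would be supplied by the consensus bookkeeping
to learn-from-consensus [consensus-positive?]
  if not Learning? [ stop ]                       ; learning can be switched off
  ifelse consensus-positive?
    [ set v_error max (list 0   (v_error - 1)) ]  ; positive learning: error falls by 1%
    [ set v_error min (list 100 (v_error + 1)) ]  ; negative learning: error rises by 1%
end
```

The max/min clamps keep the error probability within 0-100%.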
The learning effect can be disabled by setting the Learning? switch to "Off" position.
HOW TO USE IT
The model offers various buttons and sliders which can be used to control its work. The following control elements are inherited from the "Taskable Volunteers 01" model:
- Reset, Run and Step 1 buttons have the usual meanings.
- Nr-issues determines the number of issues which need to be discovered.
- Nr-volunteers allows adjusting the number of volunteers. (In this model, all volunteers are taskable.)
- Nr-confirmations determines the number of required report confirmations.
- Size-world: the number of patches in the x and y directions. The larger the world, the further the volunteers will have to move.
- R-discover determines the distance from which issues can be spotted by volunteers.
- R-task is the geo-fencing radius for taskable volunteers. Volunteers will never be asked to confirm findings that are further than R-task away from their current location.
The next group of control elements is specific to this model and is used to adjust the motivation-related model parameters:
- Init-motivation: initial motivation of the volunteers.
- Treshold-tasking_motivation: the minimal motivation necessary to accept a task.
- M-moving-loss: how many motivation points will a volunteer lose with every step while working?
- M-idle-gain: how many motivation points will a volunteer gain with every step while resting?
- M-discover: how many motivation points will a volunteer gain when reporting an observation?
- M-task: how many motivation points will a volunteer gain when tasked?
- M-consensus: additional motivation gain or loss which occurs when a consensus is reached.
- M-ego?: motivation boost for top-notch volunteers (e.g. the top 5-10 volunteers), who want to keep their top position.
Last two control elements on the left-hand side are related to errors:
- Init-error: initial probability for posting wrong report.
- Learning?: this switch controls the "learning by doing". Off means "no learning effect". (TODO: not implemented yet)
Finally, the three switches on the right-hand side allow output or visualization of some additional information, mainly useful for debugging and understanding how the model works.
THINGS TO NOTICE
This model demonstrates that volunteers need to be motivated in order to do some work. The more motivation points they receive, the more work they will perform. In a real-world situation, it is extremely important to avoid exhausting the volunteers during the exercise, and to make sure they leave the exercise highly motivated: volunteers learn from past experiences (not implemented here), and those who experienced episodes of extremely low motivation during the exercise, as well as those who go home with low motivation, are less likely to help in the next event.
In the current model, the level of satisfaction can be kept high by the following measures:
1) High self-motivation and tasking motivation thresholds.
2) A low M-moving-loss.
3) A high M-idle-gain.
4) High M-discover and M-task values.
5) A high M-consensus value, unless the probability of false reports (black dots) is high.
6) The "top-notch" volunteers can also be kept happy by boosting their ego (TODO).
7) The M-bier parameter simulates the effect of the motivation-boosting measures which happen after the end of the crisis event. The name has been chosen to suggest an “invitation to a post-crisis party”, which is one of the possible motivation-boosting post-crisis activities.
Relation to the “real world”
My primary motivation for introducing these parameters (and possibly others in the future) is to “remind” volunteer managers of the motivational possibilities they may have at their disposal, and to make them think about ways to implement these in real life.
Of course, the model is not calibrated, and the motivation boost on e.g. receiving or resolving a task will (in the real world) depend very much on the way such events are presented to the users. Common wisdom says that thanking users for their good work is usually better than not doing so, but spamming users with a huge number of messages would be counterproductive.
Last, but not least, the real issue volunteer managers have to handle is long-term motivation. This issue is not directly addressed by the current model, except in the sense of the recommendation that managers should make sure the volunteers are highly motivated at the end of the exercise, so that they participate in the next one.
COLOR CODING
The model uses shapes and color coding to visualize the situation:
All Volunteers are shaped as people.
- Resting volunteers are black and small (size=1).
- Currently tasked volunteers are orange and large (size=2).
- Volunteers which are moving around due to self-motivation are blue and size=1.5.
Issues are shaped as targets.
- Undiscovered issues are red.
- Discovered issues are orange.
- Correctly resolved issues are green.
- Incorrectly resolved issues are black.
Even more information can be shown by enabling the three switches at the bottom of the user interface.
- The first one (debug?) enables writing of debug messages.
- The second one (show-observations?) makes the observation links between reporter and the issue visible.
- The last one (show-tasks?) does the same for the task links.
The switches can be turned on and off at any time, but their effect will only be seen on new links; already existing links are not affected.
THINGS TO TRY
Play with the tasking radius.
If the tasking radius is too small, fewer volunteers will be tasked, but they will finish their tasks faster. On the other hand, volunteers who receive a far-away task will spend a lot of time finishing it. In addition, such volunteers can become exhausted on a task (v_motivation < 0). Exhausted volunteers do not recover until the end of the current event and consequently also lose inherent motivation for the next run. Exhausting the volunteers is very bad.
Play with Nr-confirmations
Confirmations are used to improve the quality of observations. Due to the way "false reports" are calculated, odd numbers of confirmations lead to unexpected results. Try setting Nr-confirmations to one and two to observe the effects. What happens if Nr-confirmations is set to zero?
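The "unexpected results" come from the way the consensus is counted in the Code tab: each report shifts i_consensus by +1 (correct) or -1 (false), an issue resolves once it has Nr-confirmations + 1 reports, and only a strictly positive i_consensus counts as a correct resolution:

```netlogo
; consensus bookkeeping, as in report-issues and manage-issues:
set i_consensus i_consensus + ifelse-value correct? [1] [-1]
; ...later, once (count in-observation-neighbors) > Nr-confirmations:
ifelse i_consensus > 0
  [ set color turquoise ]  ; correctly resolved
  [ set color black ]      ; incorrectly resolved (ties count as incorrect)
```

With Nr-confirmations = 1, for instance, an issue resolves after two reports; one correct and one false report tie at i_consensus = 0, which is counted as an incorrect resolution.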
EXTENDING THE MODEL
This model is quite simple. Here are some things that could be improved:
IMPROVED MOTIVATION FUNCTIONS
Motivation is a complex beast, and the model could be improved by implementing more realistic motivation functions. For one thing, the motivation boosts should become less and less efficient over time if used too often. Another aspect is the motivation change between events. Even if we do boost the volunteers’ motivation with a bier party after the end of the event, the effects will wear off while waiting for the next event. And the volunteers will soon be fed up with volunteering if we ask their help too often. None of this is modelled currently.
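One possible, purely hypothetical sketch of such a diminishing-return boost (the code comments themselves suggest an exponentially diminishing return): scale each boost by the number of boosts already received. Here n-boosts would be a new volunteers-own variable and decay-rate a new model parameter; neither exists in the current model.

```netlogo
; hypothetical: a motivation boost with exponentially diminishing returns;
; n-boosts counts boosts already received, decay-rate controls saturation speed
to-report diminishing-boost [base-boost]
  report base-boost * exp (- decay-rate * n-boosts)
end
```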
CALIBRATION
Eventually, the model's predictions will have to be tested and calibrated against data obtained in experiments with real volunteers.
TRUST/REPUTATION
In addition to learning by doing, the volunteers could also gain reputation by reporting. They start with some Init-reputation, and whenever their report is aligned with the consensus (even if the consensus is wrong, or only when it is right?), their reputation grows. Likewise, their reputation diminishes whenever they are part of the team which loses the consensus lottery.
In the worst case, a set of "bad reporters" could even gain reputation while delivering incorrect reports, and ruin the whole system.
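A minimal sketch of such a reputation update (the v_trust variable is already declared in volunteers-own but unused in the current model; the procedure name and the multiplicative factors are arbitrary placeholders):

```netlogo
; hypothetical reputation update, called once the consensus on a report is known
to update-reputation [aligned-with-consensus?]
  ifelse aligned-with-consensus?
    [ set v_trust v_trust * 1.05 ]  ; reputation grows on agreement
    [ set v_trust v_trust * 0.9  ]  ; and shrinks on disagreement
end
```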
MORE TASK TYPES
The only task implemented by this model is "confirm observation". Other interesting tasks are for instance:
- Patrol area: ask a volunteer to scan an area (a square is fine). Ideally, the volunteer should scan this area in some more efficient way than a random walk.
- Group scan: a group of volunteers quickly scans an area by walking in a line.
- TBD: other???
NETLOGO FEATURES
The model uses "breeds" to distinguish between the various types of agents (volunteers and issues), and directed link breeds ("observations" and "v_tasks") to represent the relations between volunteers and issues.
RELATED MODELS
See the other "Volunteers" models by the same author.
HOW TO CITE
If you mention this model in a publication, we ask that you include these citations for the model itself and for the NetLogo software:
- Havlik, D. (2015). Volunteers Motivation NetLogo model. http://modelingcommons.org/browse/one_model/4241
- Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
COPYRIGHT AND LICENSE
Copyright 2015 Denis Havlik
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
To inquire about commercial licenses, please contact Denis Havlik at denis@havlik.org.
CREDITS AND REFERENCES
This model is an indirect consequence of my work in the ENVIROFI and DRIVER EU projects. In these projects we developed mobile applications for crowdsourcing and crowdtasking, which got me interested in the factors that govern the behaviour of volunteers.
Comments and Questions
;;;;;;;;;;;;;;;;;;;;;;;;;;;
; globals and definitions ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;

globals [Continue?]

breed [volunteers volunteer]
breed [issues issue]

; "report" is a reserved word, so we'll use "observation".
directed-link-breed [ observations observation ]
directed-link-breed [ v_tasks v_task ]

; *motivation* is crucial for the model
; - at motivation >= inherent-v_motivation, the volunteer will start moving around on its own.
;   In the first round, this is the same as init-motivation.
; - at motivation <= Treshold-tasking_motivation (*), the volunteer will stop and rest, unless it's currently on a task.
; - at motivation >= Treshold-tasking_motivation the volunteer can accept tasks.
; (*) TODO: I guess that this needs its own treshold..
volunteers-own [v_state v_motivation v_moved v_error v_trust inherent-v_motivation]

; i_resolved? is set to true when we have enough reports.
issues-own [ i_resolved? i_generated i_discovered i_consensus ]

; not sure what we need in this link...
observations-own [timestamp o_correct?]

;;;;;;;;;;;;;;;;;;;;
; setup procedures ;
;;;;;;;;;;;;;;;;;;;;

to setup
  clear-all
  setup-world
  setup-volunteers
  watch volunteer 0
  follow volunteer 0
  setup-issues
  reset-ticks
end

; randomly add some taskable volunteers to the world
to setup-volunteers
  set-default-shape volunteers "person"
  ; if not is-number? Nr-taskable [ set Nr-taskable 10 ]
  create-volunteers Nr-volunteers [
    setxy random-xcor random-ycor
    set v_state "resting"
    set inherent-v_motivation init-motivation
    set v_motivation inherent-v_motivation
    set v_moved 0
    set v_error Init-error
    set v_trust 1
    set color violet
    set size 1.5
  ]
end

; randomly add some issues to the world
;
; issues have a generation time & later also a discovery time
to setup-issues
  set-default-shape issues "target"
  create-issues Nr-issues [
    setxy random-xcor random-ycor
    set i_generated 0
    set i_consensus 0
    set i_resolved? false
    set color red
    set size 1.5
  ]
end

; initially all patches are just grey = unknown.
to setup-world
  resize-world 0 Size-world 0 Size-world
  set-patch-size 900 / Size-world
  ask patches [ set pcolor grey ]
end

;;;;;;;;;;;;;;;;;;;;
; Help Procedures  ;
;;;;;;;;;;;;;;;;;;;;

; altering the volunteer's state
to alter-v_state [new_state]
  let old_state v_state
  set v_state new_state
  if new_state = "resting" [ set color black set size 1 stop ]
  if new_state = "searching" [ set color violet set size 1.5 stop ]
  if new_state = "tasked" [
    ; tasking motivation boost!
    alter-v_motivation "new-task"
    set color orange
    set size 2
    stop
  ]
  ; we should never come here!
  set v_state old_state
  show "Don't know how to change state to:" print new_state
end

; altering the volunteer's motivation.
; At the moment it's just linear in time, or step-wise with fixed steps.
; An S-function for demotivation and saturating steps with diminishing return would be more realistic,
; e.g. exponentially diminishing return with (N-events) + s-function(t) for return to normal for the single-shot motivation boosts?
to alter-v_motivation [reason]
  ; these are called on every tick
  if reason = "resting" [ set v_motivation v_motivation + M-idle-gain stop ]
  if reason = "moving" [ set v_motivation v_motivation - M-moving-loss stop ]
  ; TODO: different rate of losing motivation when tasked?
  if reason = "tasked" [ set v_motivation v_motivation - M-moving-loss stop ]
  ; these are single-shot motivation boosts and sinks
  if reason = "new-task" [ set v_motivation v_motivation + M-task stop ]
  show "Don't know how to change motivation by:" print reason
end

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Procedures governing the world development ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

to go
  assign-tasks
  move-volunteers
  report-issues
  manage-issues
  ;if count issues with [ i_resolved? = true ] = Nr-issues [ next-event ]
  ; stop unless "Continue?" is true; pressing "Run1" again will start the next event.
  if count issues with [ i_resolved? = true ] = Nr-issues [
    next-event
    if (not Continue?) [stop]
  ]
  tick
end

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; next-event setup procedure ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

to next-event
  ask volunteers [
    setxy random-xcor random-ycor
    set v_state "resting"
    ; if my motivation is higher at the end of the run than at the start of the run, "hui", if not "pfui"!
    ; this could (and should) be improved...
    set inherent-v_motivation inherent-v_motivation + ifelse-value (v_motivation + M-bier > init-motivation) [1][-1]
    set v_motivation inherent-v_motivation
    set v_moved 0
  ]
  ask issues [die]
  setup-issues
  clear-links
  ; do we need to clear/kill/reset other things? ticks? plots?
end

; volunteers can be tasked, e.g. to quality-assure the reports
to assign-tasks
  ; Note: if Nr-confirmations = N, we need N+1 observations to resolve the issue, not N!
  let open_issues issues with [i_discovered > 0 and (count in-link-neighbors) <= Nr-confirmations]
  ; no discovered but unresolved issues, let's get out of this...
  if count open_issues = 0 [ if debug? [ write "no open issues" ] stop ]
  let free_volunteers volunteers with [not any? out-v_task-neighbors and v_motivation >= Treshold-tasking_motivation]
  ; no free volunteers, let's get out of here.
  if count free_volunteers = 0 [
    ; we are overbooked, let's lower the tasking radius?
    if Adjust-R-task? and R-task >= R-discover [set R-task R-task - 1]
    if debug? [ write "no free volunteers" ]
    stop
  ]
  ask free_volunteers [
    let nearest_issue min-one-of (issues with [
      i_discovered > 0
      and not in-observation-neighbor? myself
      and (count in-link-neighbors) <= Nr-confirmations
      and distance myself <= R-task
    ]) [distance myself]
    ifelse is-issue? nearest_issue [
      ; exhausted volunteers are BAD. We shall lower the tasking radius and we shall not task this volunteer now.
      ; This is a bit of cheating though. :-)
      create-v_task-to nearest_issue [
        if show-tasks? [ set color orange ]
        if debug? [ write "created task" show self ]
      ]
      ; the volunteer is tasked now, let's show it!
      alter-v_state "tasked"
    ] [
      ; wth? Maybe the tasking radius is too small? But don't do this too fast!
      if Adjust-R-task? and R-task < Size-world / 2 [set R-task R-task + (1 / count free_volunteers)]
    ]
  ]
  ; Something like this could be used to dynamically adjust the tasking radius.
  ;set free_volunteers volunteers with [not any? out-v_task-neighbors]
  ;let c_free-volunteers count free_volunteers
  ;let c_open-issues count issues with [i_discovered > 0 and (count in-link-neighbors) <= Nr-confirmations]
  ; let's play with the tasking radius
  ;if c_free-volunteers = 0 and R-task > R-discover [set R-task R-task - 1 ]
  ;if c_free-volunteers > 0 and R-task < max-pxcor and c_open-issues > 1 [set R-task R-task + 1 ]
end

; tasked volunteers move towards their task, or at random if no task is defined and they are motivated enough.
to move-volunteers
  ask volunteers [
    ; If we are resting, rest until re-motivated
    if v_state = "resting" and v_motivation > 0 [
      alter-v_motivation "resting"
      if v_motivation >= inherent-v_motivation [ alter-v_state "searching" ]
      stop
    ]
    ; If we are "searching" (not tasked), do the random walk.
    if v_state = "searching" [
      right random 60 - 30
      alter-v_motivation "moving"
      forward 1
      if v_motivation < Treshold-selfmotivation [ alter-v_state "resting" ]
      stop
    ]
    ; If we already have a task, let's go for it!
    if any? out-v_task-neighbors [
      face one-of out-v_task-neighbors
      forward 1
      alter-v_motivation "tasked"
      stop
    ]
    ; if we are here, it means that the v_state is "tasked" but we have already resolved the task
    set v_state "searching"
  ]
end

; report observations on nearby issues
to report-issues
  ask volunteers [
    ; "myself" refers to the volunteer which initiated the loop, not to the issue!
    ; The issue would be referred to as "self" in its own context.
    let this_reporter self
    ; which issues are in the vicinity?
    ask issues in-radius R-discover [
      ; report only those issues we haven't reported already!
      if in-observation-neighbor? myself [ stop ]
      ; delete all tasks from the calling turtle to this issue
      ask my-in-v_tasks with [ other-end = this_reporter ] [
        ask this_reporter [
          set color violet
          ;set size 1.5
        ]
        if debug? [ write "deleting resolved task:" print self ]
        die
      ]
      ; don't report if there are already enough reports.
      ; This is kind of optional; results might even improve if we don't do this.
      if (count in-observation-neighbors) > Nr-confirmations [stop]
      ; is the observation correct?
      let correct? ifelse-value (Init-error < random 100) [true] [false]
      create-observation-from myself [
        if show-observations? [
          ; actually I should set it to black if the observation is wrong...
          set color ifelse-value ([i_discovered] of myself > 0) [green][red]
          if not correct? [ set color color - 3 ]
        ]
        set o_correct? correct?
        set timestamp ticks
        if debug? [print self]
      ]
      ; issue is discovered!
      if i_discovered = 0 [set i_discovered ticks]
      set color orange
      ; some of the reports will be wrong. We simplify this to "negative reports" here.
      set i_consensus i_consensus + ifelse-value correct? [1][-1]
    ]
  ]
end

to manage-issues
  ; issues are considered resolved if we get enough reports on them.
  ; Thus we can have "false positives" here too.
  ask issues with [ not i_resolved? and (count in-observation-neighbors) > Nr-confirmations ] [
    if debug? [ write "No. neighbours:" print count in-observation-neighbors ]
    set i_resolved? true
    ; if needed, tell the volunteers that their help isn't needed anymore!
    ask my-in-v_tasks [
      if debug? [ write "deleting obsolete tasks (2):" print self ]
      ask other-end [
        alter-v_state "searching"
        ; TODO: we should reverse this if the o_correct? of the observation corresponding to this task is false!
        ifelse [i_consensus] of other-end > 0
          [ alter-v_motivation "+harmony" ]
          [ alter-v_motivation "-harmony" ]
      ]
      die
    ]
    ifelse i_consensus > 0
      [ set color turquoise ]
      [ set color black ]
  ]
end
There is only one version of this model, created over 10 years ago by Denis Havlik.
Attached files
File | Type | Description | Last updated
---|---|---|---
Volunteers motivation.png | preview | Preview for 'Volunteers motivation' | over 10 years ago, by Denis Havlik
This model does not have any ancestors.
This model does not have any descendants.
Denis Havlik
How to model the volunteers motivation? (Question)
My first try at modeling the volunteers' motivation is here. It shows the main features, such as "telling the volunteers that they did a great job will motivate them" and "a long time spent volunteering is bad for motivation", but it's far from perfect. The question is: what should I target next?
Posted over 10 years ago
João Antônio
File Error (Question)
It is not possible to open the zip file. An error occurs. Is it possible to upload it again?
Posted over 9 years ago