David Hales (2002)
Group Reputation Supports Beneficent Norms
Journal of Artificial Societies and Social Simulation
vol. 5, no. 4
To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary
<https://www.jasss.org/5/4/4.html>
Received: 6-Jul-2002 Accepted: 30-Sep-2002 Published: 31-Oct-2002
Figure 1. Agents (shown as black circles) can see and move within one cell of the von Neumann neighbourhood (VNN) - see (a). However, they can smell food within two cells of the VNN - see (b).
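The two perception ranges can be made concrete with a small sketch. This is purely illustrative: the helper name, the toroidal wrap-around and the 10x10 grid size are assumptions, not taken from the original implementation.

def von_neumann(cell, radius, size):
    # Cells within the given Manhattan distance of `cell` on a size x size torus
    # (the wrap-around is an assumption made for this sketch).
    x, y = cell
    cells = []
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            if 0 < abs(dx) + abs(dy) <= radius:
                cells.append(((x + dx) % size, (y + dy) % size))
    return cells

visible = von_neumann((5, 5), radius=1, size=10)    # 4 cells the agent can see and move to - Figure 1(a)
smellable = von_neumann((5, 5), radius=2, size=10)  # 12 cells within smelling range - Figure 1(b)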
If an agent cannot see an adjacent free food item or smell a more distant one, then it will consider any possible attacks it may make on other agents that possess food. If an adjacent cell (VNN) containing an agent eating a food item is seen, then the agent may decide to attack and take possession of the food item. When deciding whether to attack, the agent refers to an attack "strategy". Each agent is allocated one of the strategies shown in Table 2 (a sketch of these decision rules follows the table):
Table 2: The different strategies agents may use
Strategy | Description |
Blind | Attack any adjacent agent holding food |
Strategic | Only attack a food carrier if they are weaker |
Normative | Respect the "ownership" norm: only attack a food carrier if the carrier is not the "owner" of the food and is weaker. |
NormRep | As "Normative" but additionally store bad reputations (agents observed attacking owners) and do not respect the ownership norm when dealing with agents that have a bad reputation. |
NormRepCom | As "NormRep" but additionally, communicate reputational information to neighbours with good reputation. |
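The attack rules of Table 2 can be summarised in a short Python sketch, as referred to above. It is a reading aid only: the function and field names, the numeric strength comparison and the cheaters set are assumptions, not the original code.

def attacks(strategy, me, target, target_owns_food, cheaters):
    # Return True if an agent using `strategy` attacks an adjacent food carrier.
    if strategy == "Blind":
        return True                                 # attack any adjacent carrier
    weaker = target["strength"] < me["strength"]
    if strategy == "Strategic":
        return weaker                               # only attack weaker carriers
    if strategy == "Normative":
        return weaker and not target_owns_food      # respect the ownership norm
    if strategy in ("NormRep", "NormRepCom"):
        if target["id"] in cheaters:
            return weaker                           # norm suspended for known cheaters
        return weaker and not target_owns_food      # otherwise behave normatively
    return False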
LOOP 50 times
    Select an agent (A) at random from the population (with replacement)
    Activate agent - (agent (A) selects and executes one action):
        IF appropriate - receive reputational information from all neighbours
        IF current cell contains food then
            IF food prepared then {EAT-FOOD}
            IF food picked-up then {PREPARE-FOOD}
            IF food not-picked-up then {PICKUP-FOOD}
        END IF
        IF free food item is visible in neighbourhood then {MOVE} to food item
        IF a food item can be smelled two cells away then {MOVE} towards it
        IF an agent holds a food item one cell away in neighbourhood then
            IF current strategy allows then {ATTACK}
        END IF
        IF any neighbouring cells are free then select one at random and {MOVE}
        No other actions are possible so {PAUSE}
END LOOP
Figure 2. A pseudo-code outline of what happens in a single cycle. Note that the actions (in curly brackets {}) indicate the action selected by the agent. Implied in the algorithm is that each action is followed immediately by an "exit" which shifts control to the END LOOP line (ready for the next iteration). In all cases where several options exist (i.e. where the agent can see several free food items or several agents to attack) a random choice is made.
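For readers who prefer an executable form, the cycle in Figure 2 can be sketched in Python as below. All agent and world method names are assumed for illustration; only the ordering of the checks follows the pseudo-code.

import random

def run_cycle(world, agents, steps=50):
    for _ in range(steps):
        a = random.choice(agents)                        # selection with replacement (see note 11)
        if a.uses_reputation():
            a.receive_reputation_from_neighbours()
        if world.food_at(a.cell):
            if a.food_prepared():
                a.eat_food()
            elif a.food_picked_up():
                a.prepare_food()
            else:
                a.pickup_food()
        elif (target := a.visible_free_food()) is not None:
            a.move_towards(target)                       # free food one cell away
        elif (target := a.smelled_food()) is not None:
            a.move_towards(target)                       # food smelled two cells away
        elif (victim := a.attackable_neighbour()) is not None:
            a.attack(victim)                             # neighbour holding food, and strategy allows
        elif (cell := a.free_neighbouring_cell()) is not None:
            a.move_to(cell)
        else:
            a.pause()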
Table 3: Homogeneous population results
(a) Results from Conte et al (1995)
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Blind | 4287 | 204 | 1443 | 58 | 9235 | 661 |
Strategic | 4727 | 135 | 1775 | 59 | 4634 | 248 |
Normative | 5585 | 27 | 604 | 41 | 3018 | 76 |
(b) Replication results
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Blind | 3863 | 121 | 1652 | 52 | 15943 | 874 |
Strategic | 4134 | 110 | 1880 | 50 | 5120 | 239 |
Normative | 4451 | 26 | 479 | 30 | 940 | 32 |
Figure 3. Results from table 3 shown in graphical form. The results denoted (a) are the original results; (b) shows the replication results.
Table 4: Mixed population results
(a) Results from Conte et al (1995)
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Blind | 4142 | 335 | 1855 | 156 | 4686 | 451 |
Strategic | 4890 | 256 | 1287 | 102 | 2437 | 210 |
Blind | 5221 | 126 | 1393 | 86 | 4911 | 229 |
Normative | 4124 | 187 | 590 | 80 | 1856 | 74 |
Strategic | 5897 | 85 | 1219 | 72 | 3168 | 122 |
Normative | 2634 | 134 | 651 | 108 | 2034 | 71 |
(b) Replication results
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Blind | 3586 | 282 | 1744 | 182 | 7993 | 569 |
Strategic | 4369 | 267 | 1701 | 131 | 2941 | 254 |
Blind | 5051 | 116 | 1472 | 111 | 7365 | 266 |
Normative | 3037 | 144 | 491 | 69 | 363 | 41 |
Strategic | 5384 | 96 | 1481 | 109 | 3800 | 164 |
Normative | 2800 | 136 | 482 | 84 | 320 | 36 |
Figure 4. Results from table 4 shown in graphical form. The results denoted (a) are the original results; (b) shows the replication results. Notice that, since these experiments use mixed populations, statistics for each strategy group are given separately for comparison.
Table 5: Reputation and communication results
(a) Results from Castelfranchi et al (1998)
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Strategic | 5973 | 89 | 1314 | 96 | 3142 | 140 |
NormRep | 3764 | 158 | 631 | 101 | 1284 | 59 |
Strategic | 4968 | 309 | 2130 | 108 | 2417 | 227 |
NormRepCom | 4734 | 301 | 737 | 136 | 2031 | 253 |
(b) Replication results
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Strategic | 5329 | 106 | 1563 | 116 | 3733 | 182 |
NormRep | 2870 | 152 | 496 | 75 | 379 | 58 |
Strategic | 4317 | 311 | 2299 | 113 | 2890 | 295 |
NormRepCom | 3880 | 321 | 711 | 152 | 1489 | 273 |
Figure 5. Results from table 5 shown in graphical form. The results denoted (a) are the original results; (b) shows the replication results. Notice that, since these experiments use mixed populations, statistics for each strategy group are given separately for comparison.
Table 6: Partitioned into 10 groups (random)
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Strategic | 4926 | 209 | 1920 | 172 | 3421 | 208 |
NormRep | 3116 | 210 | 1195 | 129 | 1654 | 170 |
Strategic | 4286 | 291 | 1971 | 132 | 2844 | 276 |
NormRepCom | 3820 | 299 | 1746 | 162 | 2400 | 264 |
Figure 6. Results from table 6 shown in graphical form. Notice that, since these experiments use mixed populations, statistics for each strategy group are given separately for comparison. The thick black vertical line separates two independent experiments |
Table 7: Partitioned into 10 groups (non-random)
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Strategic | 4767 | 247 | 2158 | 154 | 3285 | 259 |
NormRep | 3416 | 267 | 588 | 108 | 1041 | 212 |
Strategic | 3906 | 326 | 2297 | 111 | 2459 | 304 |
NormRepCom | 4370 | 338 | 734 | 143 | 1874 | 271 |
Figure 7. Results from table 7 shown in graphical form. Notice that, since these experiments use mixed populations, statistics for each strategy group are given separately for comparison. The thick black vertical line separates two independent experiments. Note that, in the results to the right of the line, the strength (Str) of the NormRepCom group is higher than that of the Strategic group.
Table 8: Partitioned into 2 groups (non-random)
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Strategic | 4281 | 314 | 2324 | 121 | 2813 | 266 |
NormRep | 3960 | 294 | 667 | 136 | 1537 | 262 |
Strategic | 3730 | 373 | 2255 | 180 | 2312 | 339 |
NormRepCom | 4559 | 398 | 713 | 145 | 2025 | 306 |
Figure 8. Results from table 8 shown in graphical form. Notice that, since these experiments use mixed populations, statistics for each strategy group are given separately for comparison. The thick black vertical line separates two independent experiments. Note that, in the results to the right of the line, the strength (Str) of the NormRepCom group is higher than that of the Strategic group (and higher than in the 10-group case shown in figure 7).
Table 9: Partitioned into 2 groups (one swap)
Strategy | Str | st.dev | Var | st.dev | Agg | st.dev
Strategic | 4374 | 292 | 2168 | 151 | 2937 | 301 |
NormRep | 3683 | 332 | 1351 | 252 | 2213 | 292 |
Strategic | 4146 | 334 | 1932 | 150 | 2666 | 307 |
NormRepCom | 3993 | 337 | 1791 | 184 | 2561 | 297 |
Figure 9. Results from table 9 shown in graphical form. Notice that, since these experiments use mixed populations, statistics for each strategy group are given separately for comparison. The thick black vertical line separates two independent experiments. Note that, in the results to the right of the line, the strength (Str) of the NormRepCom group is lower than that of the Strategic group (hence the advantage seen in figure 8 has disappeared).
2 Certainly, these are only a subset of norms. Norms may "enable" choice rather than prescribe it, and need not be at odds with individualistic self-interested behaviour. It would seem that the preoccupation with this subset of social norms is due to their seemingly incongruous nature when set against (classical conceptions of) self-interested behaviour and, more recently, myopic optimising (via, say, some evolutionary process).
3 It would appear that there are many mechanisms for resolving such conflicts in different contexts such as central authority, differential power, negotiation and the "shadow of the future" etc.
4 We have revisited this framework for three main reasons: firstly, to check the robustness of previous results via replication; secondly, the results have some comparability with other work; and finally, the framework seems to capture minimally the kinds of interactions needed for a study of normative behaviour.
5 The re-implemented model relaxes synchronous agent action assumptions; this is discussed later.
6 For more on the nature of artificial society methodology see Hales (1998b) and Hales (2001) chapter 3.
7 As stated later, agents are selected randomly from the population to perform actions. To consume food, therefore, an agent has to be selected twice from the population. It is therefore highly likely that other agents will have a chance to act, and to attack the agent to "steal" the food, if this is possible.
8 In this way, since agents are tightly packed on the grid and compete over food, several agents can end up continually snatching food from each other and never actually eating. Hence a social dilemma emerges, since everyone would be better off if the food were shared out.
9 If an agent dies it is not replaced. It is therefore theoretically possible that a substantial number of agents could die during a simulation run. However, the initial energy levels (see later) and food energy values were selected such that agent death is very rare - at most, one or two agents die in some small proportion of the simulation runs. For the substance of the findings this detail can be effectively ignored.
10 The food items and agents are distributed uniformly at random over the grid under the constraint that a cell can contain at most a single agent, a single food item, or one agent and one food item.
11 This means that it is quite possible (indeed probable) that in a given cycle some agents may get to act more than once and other agents may not get a chance to act at all. This method of agent selection was chosen since it was a further relaxation (which would test the robustness of the existing findings) and would remove any artefacts that might result from sequential agent selection (see Hegselmann and Flache 1998).
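The effect of selection with replacement can be quantified with a one-line calculation. Assuming, purely for illustration, a population of 50 agents and the 50 selections per cycle shown in Figure 2:

n_agents, draws = 50, 50
p_never_acts = (1 - 1 / n_agents) ** draws
print(round(p_never_acts, 3))   # ~0.364: roughly a third of agents do not act in a given cycle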
12 There is an assumption that agent reports are completely reliable. So once an agent is identified as a cheater, it can never have its reputation redeemed - no matter how it acts from that point on.
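This irredeemability follows naturally if reputation is stored as a set that only ever grows, and if NormRepCom gossip simply merges sets. The minimal sketch below illustrates the idea; the class and method names are assumptions, not the original implementation.

class ReputationStore:
    def __init__(self):
        self.cheaters = set()              # ids of agents seen (or reported) attacking owners

    def observe_norm_violation(self, attacker_id):
        self.cheaters.add(attacker_id)     # entries are never removed: no redemption

    def share_with(self, neighbour_store, neighbour_id):
        # NormRepCom agents only pass reputational information to neighbours
        # they believe to have a good reputation (Table 2).
        if neighbour_id not in self.cheaters:
            neighbour_store.cheaters |= self.cheaters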
13 The value given is not simple variance but the standard deviation over the energy values for each of the agents at the end of the run. The label "Var" has been used to avoid confusion with the adjacent standard deviation columns taken over the 100 runs.
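To make the distinction explicit: "Var" is computed within a single run, while the adjacent "st.dev" columns are computed over the 100 runs. A sketch of how the table statistics could be assembled is given below; the run structure, and the reading of Str as the mean final strength and Agg as the aggression count per run, are assumptions consistent with the text.

from statistics import mean, stdev

def summarise(runs):
    # `runs` is a list of (final_strengths, aggression_count) pairs, one per run.
    strengths = [mean(s) for s, _ in runs]     # Str: mean final strength in each run
    spreads = [stdev(s) for s, _ in runs]      # Var: within-run spread of final strengths
    attacks = [a for _, a in runs]             # Agg: aggressive acts in each run
    return {col: (mean(vals), stdev(vals))     # column value and its st.dev over the runs
            for col, vals in (("Str", strengths), ("Var", spreads), ("Agg", attacks))}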
14 In Castelfranchi et al. (1998) it is stated that several grid sizes and agent densities were experimented with, giving broadly similar results. However, for the current model this has not been tested. Certainly, an exploration of density parameters would give a better understanding of the model, specifically how sensitive the results are to densities. This could be the focus of a future study.
15 Currently, the specific mechanisms that produce this difference have not been investigated.
16 A common saying that sums this situation up goes: "it only takes one rotten apple to spoil the whole barrel".
17 For some previous attempts to address these issues see Hales (1998c) and Hales (2001). In both of these previous attempts, however, complexity appears to have got the better of the author. Later in the paper, future work is discussed which attempts to make new progress along these lines.
18 We use this term only to denote action selection (myopically) directed by individual self-interest.
19 By "significant number" is meant enough other agents in the population that it is likely that agent p will come to directly or indirectly spread reputational information to them. Obviously, in the given model spatial and communication issues become significant here.
20 Tags are features attached to agents that represent "social cues". These are features that can be observed by other agents in order to categorise them as members of some social group.
21 At the suggestion of one reviewer of a previous draft of this paper, further simulations were run in which unequal proportions of norm followers and strategic agents were analysed. Experiments with 10 non-random groups, but with agents distributed 30% strategic to 75% norm followers, showed greater advantages to norm followers. In the reverse condition strategic agents outperformed the normative agents. This strongly suggests that for normative agents to evolve from a small to a large group, they would have to be spatially grouped - this deserves future study.
22 There is empirical evidence that humans are very predisposed to "groupish" behaviour - even in quite artificially constructed "commons dilemma" scenarios where interaction is limited - see Kramer and Brewer (1984) for empirical results from experiments with real human groups.
23 There is a large body of empirical experiments and observations of real humans (as well as our daily experience of life) which indicates that humans often do use gross stereotyping in many social situations - see Oakes et al. (1994).
CASTELFRANCHI, C., Conte, R. and Paolucci, M. (1998) Normative reputation and the costs of compliance. Journal of Artificial Societies and Social Simulation 1(3), https://www.jasss.org/1/3/3.html
CONTE, R. and Castelfranchi, C. (1995) Understanding the functions of norms in social groups through simulation. In Gilbert, N. and Conte, R. (Eds.) Artificial Societies - The Computer Simulation of Social Life. London: UCL Press. pp. 74-118.
CONTE, R. and Paolucci, M. (2002) Reputation in Artificial Societies. Social Beliefs for Social Order, Kluwer.
EPSTEIN, J. M. (2001) Learning to be thoughtless: Social norms and individual computation. Computational Economics 18, 9-24.
FLENTGE, F., Polani, D. and Uthmann, T. (2001) Modelling the Emergence of Possession Norms using Memes. Journal of Artificial Societies and Social Simulation 4(4), https://www.jasss.org/4/4/3.html
HALES, D. (1998a) An Open Mind is Not an Empty Mind - Experiments in the Meta-Noosphere. The Journal of Artificial Societies and Social Simulation 1(4), https://www.jasss.org/1/4/2.html
HALES, D. (1998b) Artificial Societies, Theory Building and Memetics. In Proceedings of the 15th International Conference on Cybernetics, International Association for Cybernetics (IAC), Namur: Belgium. Available at: http://www.davidhales.com
HALES, D. (1998c) Stereotyping, Groups and Cultural Evolution. In Sichman, J., Conte, R., & Gilbert, N. (Eds.) Multi-Agent Systems and Agent-Based Simulation. Lecture Notes in Artificial Intelligence 1534. Berlin: Springer-Verlag. Available at: http://www.davidhales.com
HALES, D. (2000) Cooperation without Space or Memory: Tags, Groups and the Prisoner's Dilemma. In Moss, S., Davidsson, P. (Eds.) Multi-Agent-Based Simulation. Lecture Notes in Artificial Intelligence 1979. Berlin: Springer-Verlag.
HALES, D. (2001) Tag Based Cooperation in Artificial Societies. Unpublished PhD Thesis. University of Essex. Available at: http://www.davidhales.com/thesis
HALES, D. (2002a) The Evolution of Specialization in Groups. Presented to the RASTA'02 workshop at the AAMAS 2002 Conference. To be published by Springer-Verlag.
HALES, D. (2002b) Evolving Specialisation, Altruism and Group-Level Optimisation Using Tags. Presented to the MABS'02 workshop at the AAMAS 2002 Conference. To be published by Springer-Verlag.
HOBBES, T. (1962) Leviathan, Fontana. Available at: http://www.orst.edu/instruct/phl302/texts/hobbes/leviathan-contents.html
HEGSELMANN, R. and Flache, A. (1998) Understanding complex social dynamics. A plea for cellular automata based modelling. Journal of Artificial Societies and Social Simulation 1(3), https://www.jasss.org/1/3/1.html
JENNINGS, N. and Campos, J. (1997). Towards a Social Level Characterisation of Socially Responsible Agents. IEE proceedings on Software Engineering, 144(1):11-25.
KALENKA, S. and Jennings, N. (1999). Socially Responsible Decision Making by Autonomous Agents. In Korta, K., Sosa, E. and Arrazola, X., (Eds.), Cognition, Agency and Rationality. Kluwer.
KRAMER, R. and Brewer, M. (1984). Effects of Group Identity on Resource Use in a Simulated Commons Dilemma. Journal of Personality and Social Psychology. 46(5), pp. 1033-1047.
MOSS, S. (2002) Policy analysis from first principles. Proceedings of the National Academy of Sciences of the USA, Vol. 99, Supp. 3, pp. 7267-7274.
OAKES, P. et al. (1994) Stereotyping and Social Reality. Blackwell, Oxford.
RIDLEY, M. (1996). The Origins of Virtue. Penguin Books, London.
RIOLO, R., Cohen, M. D. & Axelrod, R. (2001), Cooperation without Reciprocity. Nature 414, 441-443.
SAAM, N. and Harrer, A. (1999) Simulating Norms, Social Inequality, and the Functional Change in Artificial Societies. Journal of Artificial Societies and Social Simulation 2(1), https://www.jasss.org/2/1/2.html
SHOHAM, Y. and Tennenholtz, M. (1992) On the synthesis of useful social laws for artificial agent societies (preliminary report). In Proceedings of the AAAI Conference. pp. 276-281.
STALLER, A. and Petta, P. (2001) Introducing Emotions into the Computational Study of Social Norms: A First Evaluation. Journal of Artificial Societies and Social Simulation 4(1), https://www.jasss.org/4/1/2.html
WALKER, A. and Wooldridge, M. (1995) Understanding the emergence of conventions in multi-agent systems. In Proceedings of the First International Conference on Multi-Agent Systems (ICMAS), San Francisco.
© Copyright Journal of Artificial Societies and Social Simulation, 2002