Abstract
Cooperation is a very common, yet not fully-understood phenomenon in natural and human systems. The introduction of a network within the population is known to affect the outcome of cooperative dynamics, allowing for the survival of cooperation in adverse scenarios. Recently, the introduction of multiplex networks has yet again modified the expectations for the outcome of the Prisoner’s Dilemma game, compared to the monoplex case. However, much remains unstudied regarding the other social dilemmas on multiplex networks, as well as their microscopic underpinnings. In this paper, we systematically study the evolution of cooperation in all four games of the T − S plane on multiplex networks. More importantly, we find some remarkable and previously unknown features in the microscopic organization of the strategies, which are responsible for the important differences between cooperative dynamics in the monoplex and the multiplex case. Specifically, we find that in the stationary state there are individuals that play the same strategy in all layers (coherent) and others that do not (incoherent). This second group of players is responsible for the surprising lack of full cooperation in the Harmony Game on multiplex networks, never observed before, as well as for higher-than-expected cooperation rates in some regions of the other three social dilemmas.
Introduction
Cooperation is a ubiquitous and yet not fully-understood phenomenon in Nature: from humans that cooperate to build complex societies, to animals like wolves that hunt in packs in order to catch prey larger than themselves, or meerkats that take turns watching out for predators while the rest of the colony feeds. Even small microorganisms cooperate to survive in hostile environments. For instance, Dictyostelium discoideum, usually a solitary amoeba, associates with others when it starves in order to form a multicellular slug for the sake of survival. Explaining how cooperation has emerged and has resisted more selfish behaviours is one of the biggest challenges in the natural and social sciences. From a mathematical point of view, the problem of cooperation within a population can be studied using Evolutionary Game Theory1,2,3. Multiple mechanisms have been proposed to explain the evolution of cooperation, such as reputation, kin selection, network reciprocity or punishment4,5,6. On the other hand, outstanding experimental efforts have been made in the last few years7,8,9,10,11 to try to understand how actual humans behave when confronted with social dilemmas in a formal Game Theory environment.
We focus here on the impact of the structure of the network of interactions among individuals on the outcome of the cooperative dynamics. The study of networks, their properties and dynamics, has experienced enormous advances in the last few decades, empowered by the technological progress that enables the acquisition of real data about interactions between individuals from social networks12,13, mobile communication networks14 or collaborations between scientific authors15. There is a vast literature on the evolution of cooperation on complex networks16,17,18, studying aspects ranging from the effect of network topology on cooperation19 to network growth driven by cooperation dynamics20,21 and other spatial and temporal effects22 that offer insights on how cooperation can evolve and survive in different scenarios.
An innovative way of representing multiple types of social interactions in one single structure is the use of multiplex networks (see Fig. 1)23,24,25,26, which have already been successfully applied to the study of disease spreading27 and diffusion dynamics28 (for a complete review see Ref. 29). Multiplex networks are interesting in this field because many social interactions can be understood as a combination of interactions at different, independent levels, each one representing a different social scenario such as family, friends, coworkers, etc. An individual’s behaviour can be different in each level, but it is ultimately conditioned by all of them. Some work has been done to understand the evolution of the Prisoner’s Dilemma game on multiplex networks30, and to explore different coupled evolutionary games on interdependent networks31. The impact of degree correlations among layers32 on the outcome of social dilemmas has also been studied on a 2-layer network, where one layer was used for the accumulation of payoffs and the other for strategy updating. There are also works that explore the problem of cooperation on coupled networks33 and even optimize the interdependence between them via coevolution34. However, the evolution of cooperation on top of multiplex networks with an arbitrary number of layers has not been systematically studied for all four social dilemmas.
The objective of this paper is, on the one hand, to provide an exhaustive analysis of the resilience and propagation of cooperation in the main four social dilemmas in Game Theory literature, studying the average levels of cooperation, payoff distribution and dependence on the initial fraction of cooperation, as a function of the number of layers of the multiplex. More importantly, we will focus on analyzing the previously unexplored microscopic behaviour of individuals across layers.
This work is organized as follows. In the Model Section we define the model we have used in this work. The Results Section contains our findings on the density of cooperators for each one of the proposed scenarios. Then we turn our attention to the microscopic behaviour of individuals across different layers. Finally, a summary and conclusions can be found in the Discussion Section.
Model
We will focus on two-strategy social dilemmas. If we assume that each player in the system can either cooperate (C) or defect (D), a game can be defined according to its payoff matrix:

$$\begin{array}{c|cc} & C & D \\ \hline C & R & S \\ D & T & P \end{array}$$

where R represents the reward obtained by a cooperator playing against another cooperator, S is the sucker payoff obtained by a cooperator when it plays against a defector, the temptation payoff, T, is the payoff received by a defector when its opponent is a cooperator and, finally, P represents the payoff obtained by a defector that engages with another defector.
Traditionally, the values of R and P are fixed to R = 1 and P = 0 in order to provide a fixed scale for the game payoffs35,36. Applying this constraint, the selection of the remaining parameters T and S enables the definition of several games according to their evolutionary stability. Thus, if R > S > P and R > T > P the game is the Harmony game37; the final state of a population playing this game will be total cooperation, regardless of the initial fraction of cooperators. The Prisoner’s Dilemma38,39,40, T > R > P > S, represents the opposite situation: the population evolves towards total defection regardless of the initial conditions (although all players would be better off cooperating, hence the dilemma). A classical example of a coordination game, the Stag Hunt41,42, arises when the payoff values respect the order R > T > P > S; the outcome of this game will be either total defection or total cooperation, depending on the initial conditions. Finally, an anti-coordination game, the Hawk-Dove1,43, takes place if the payoff values follow T > R > S > P; in this case the final state will be a population made of both cooperators and defectors.
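For concreteness, the classification above can be summarized in a few lines of code. The following is a minimal Python sketch (our own illustration, not part of the original study); the function name and the handling of quadrant borders are arbitrary choices.

```python
def classify_game(T, S, R=1.0, P=0.0):
    """Map a point of the T-S plane (with R = 1, P = 0) to its social dilemma."""
    if R > S > P and R > T > P:
        return "Harmony"               # cooperation dominates
    if T > R > P > S:
        return "Prisoner's Dilemma"    # defection dominates
    if R > T > P > S:
        return "Stag Hunt"             # coordination game
    if T > R > S > P:
        return "Hawk-Dove"             # anti-coordination game
    return "border case"               # point lies on a quadrant boundary

print(classify_game(T=1.5, S=-0.5))    # -> Prisoner's Dilemma
```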
The players sit on the nodes of a multiplex network of L layers. Each node is present in all layers but, in general, it has a different connectivity in each layer. Every layer ℓ in the multiplex is a connected and homogeneous Erdős-Rényi (ER) network with the same number of nodes N and edges E (and hence the same degree distribution), and the multiplex is generated avoiding degree correlations between layers. Each layer is represented by an adjacency matrix $A^{\ell}$, where $A^{\ell}_{ij}=1$ if nodes i and j are connected in that layer and $A^{\ell}_{ij}=0$ otherwise. This representation enables the definition of the degree of node i in layer ℓ as $k_i^{\ell}=\sum_{j} A^{\ell}_{ij}$ and of its global degree in the multiplex as $k_i=\sum_{\ell=1}^{L} k_i^{\ell}$.
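The paper does not provide code, but the multiplex substrate just described can be sketched, for instance, with the networkx library; the rejection loop enforcing connectedness and the helper names below are our own illustrative choices.

```python
import networkx as nx

def er_layer(n, m):
    """Draw G(n, m) Erdos-Renyi graphs until a connected one is obtained."""
    while True:
        g = nx.gnm_random_graph(n, m)
        if nx.is_connected(g):
            return g

def build_multiplex(n=1000, m=3000, layers=5):
    """Independent ER layers over the same node set; drawing each layer
    independently avoids inter-layer degree correlations."""
    return [er_layer(n, m) for _ in range(layers)]

multiplex = build_multiplex()
# global degree of each node: its degree summed over all layers
k_global = [sum(layer.degree(i) for layer in multiplex) for i in range(1000)]
```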
Each round of the game is divided in two phases: payoff collection and strategy update. Each node i chooses to play one of the two strategies, cooperation or defection, independently in each layer of the network and at every time step, $s_i^{\ell}(t) \in \{C, D\}$. Given a specific payoff matrix, node i’s strategy determines the payoff $p_i^{\ell}$ that it obtains in a layer ℓ when it plays against all its neighbours in that layer. The total payoff of node i is then easily calculated as $P_i=\sum_{\ell=1}^{L} p_i^{\ell}$. At the end of each round, each player can change its strategy in one of its layers using the Replicator-like rule: node i chooses a layer of the multiplex, ℓ_r, with uniform probability; then it chooses with uniform probability one of its neighbours, j_r, in that layer. If $P_{j_r} > P_i$, the probability that node i copies the strategy of j_r in layer ℓ_r is given by:

$$\Pi_{i \rightarrow j_r} = \frac{P_{j_r} - P_i}{\max(k_i, k_{j_r})\,\left[\max(1,T) - \min(0,S)\right]},$$

where $k_i$ and $k_{j_r}$ are the global degrees defined above, so that the denominator is the largest payoff difference attainable by the two players.
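To make the two phases concrete, here is a minimal Python sketch of one round (payoff collection followed by one update attempt per node), written against the list of networkx layers built above. It is our own reading of the rule described in the text, not the authors’ code; in particular, the strategies data structure (one boolean vector per layer, True meaning cooperation) is an assumption of this illustration.

```python
import random

def collect_payoffs(multiplex, strategies, T, S, R=1.0, P=0.0):
    """Total payoff of every node, summed over all layers."""
    n = multiplex[0].number_of_nodes()
    total = [0.0] * n
    for layer, strat in zip(multiplex, strategies):
        for i, j in layer.edges():
            if strat[i] and strat[j]:          # C vs C
                total[i] += R; total[j] += R
            elif strat[i] and not strat[j]:    # i cooperates, j defects
                total[i] += S; total[j] += T
            elif strat[j] and not strat[i]:    # j cooperates, i defects
                total[j] += S; total[i] += T
            else:                              # D vs D
                total[i] += P; total[j] += P
    return total

def replicator_update(multiplex, strategies, total, k_global, T, S):
    """Each node picks a random layer and a random neighbour there, and copies
    that neighbour's strategy in that layer with the normalized probability."""
    norm = max(1.0, T) - min(0.0, S)           # largest payoff per link
    for i in range(multiplex[0].number_of_nodes()):
        l = random.randrange(len(multiplex))
        j = random.choice(list(multiplex[l].neighbors(i)))
        if total[j] > total[i]:
            prob = (total[j] - total[i]) / (max(k_global[i], k_global[j]) * norm)
            if random.random() < prob:
                strategies[l][i] = strategies[l][j]
```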
It is important to notice that the update rule uses global information about the players: the global degree and the global payoff (that is, summed over all layers) are used to update the strategy of any particular layer. This is the way our model shares information between layers, and it relies on the social nature of the layers’ interdependency30: each player only has information about the strategy of its neighbour in the layer they share (but not in the layers where they are not connected). However, it knows its neighbour’s total payoff, and it makes the simplifying assumption that the neighbour is using the same strategy in every layer. As we will see later on, this fact has a profound impact on the outcomes of the dynamics, compared to the monoplex scenario.
At the end of each time step, the density of cooperators can be computed for each layer and for the entire multiplex as:

$$c^{\ell}(t) = \frac{1}{N}\sum_{i=1}^{N} x_i^{\ell}(t), \qquad c(t) = \frac{1}{L}\sum_{\ell=1}^{L} c^{\ell}(t),$$

where $x_i^{\ell}(t)=1$ if node i plays as a cooperator in layer ℓ at time t and $x_i^{\ell}(t)=0$ otherwise.
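With the strategies structure assumed in the sketch above, these densities amount to simple averages, for instance:

```python
def cooperation_density(strategies):
    """Per-layer densities c_l(t) and multiplex density c(t) from a list of
    boolean strategy vectors (True = cooperator), one vector per layer."""
    per_layer = [sum(strat) / len(strat) for strat in strategies]
    return per_layer, sum(per_layer) / len(per_layer)
```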
Results
To ascertain the outcome of the cooperative dynamics for the different games on multiplex networks, we will start by studying the stationary level of cooperation in the system, then we will study the effect of the initial fraction of cooperators and finally, we will move to analyzing in detail the microscopic organization of cooperation for individuals across different layers.
The results are obtained for a range of values of T ∈ [0, 2] and S ∈ [−1, 1] that defines the T − S plane. The simulations run on a multiplex network with N = 1000 nodes and E = 3000 edges per layer, distributed according to an Erdős-Rényi degree distribution with ⟨k⟩ = 6. For each pair of values of the game parameters, the simulation runs for t_0 = 1 × 10^5 time steps, the transient time generally needed by the algorithm to reach a stationary state (we further discuss the matter of convergence time in the Supplementary Information). After this time the algorithm runs for another t_γ = 2 × 10^4 time steps, and all the quantities of interest are averaged over this second period. The experiments are repeated and averaged over I = 64 different networks and initializations in order to gain statistical confidence. The initial fraction of cooperators, c_0, is distributed randomly in each layer. We focus here on the case c_0 = 0.5, although we have also explored other values (see Supplementary Information).
Density of cooperators
The stationary average value of cooperation is defined as the time average of the multiplex cooperation density over the measurement window, further averaged over the I realizations:

$$\langle c \rangle = \frac{1}{t_\gamma} \sum_{t=t_0+1}^{t_0+t_\gamma} c(t).$$
In Fig. 2 we present the average stationary value of cooperation when c_0 = 0.5. We observe that our results for the monoplex case (left) are consistent with those obtained by Roca et al.22 for this kind of network. The results for multiplex networks show a large increase in the areas where both strategies coexist (that is, the areas in the plane that separate total cooperation from total defection). However, this coexistence is of a different nature depending on the evolutionary stability of the particular game (or quadrant), as we explain below.
The Stag Hunt game has an unstable evolutionary equilibrium with mixed populations. This means that, when there is structure, the population will evolve towards total cooperation or total defection depending on the initial population and on the structure of the network (due to this fact, the standard deviation of ⟨c⟩ is large in that transition area, see Supplementary Information for details). For the monoplex we have a very narrow transition area between total-cooperation and total-defection populations (left panel in Fig. 2). This transition region widens with the number of layers, enabling the coexistence of both strategies in a larger portion of the game parameter space. The explanation of such behaviour can be found in the inter-layer dynamics: it is more likely that a cooperator or a defector node resists in a hostile environment in a particular layer, because its fitness is not evaluated in just that layer but also in the other layers where, due to its strategy or its topological configuration, the node might perform better. The Stag Hunt game, where the maximum possible payoff is obtained when a cooperator plays against another cooperator, especially favours the resilience of cooperator nodes when the temptation value is low: a cooperator node i in layer ℓ_r that has a large payoff P_i has a higher probability of spreading its strategy to its defector neighbours in ℓ_r, thus increasing its payoff further. This increase propagates to the other layers, making the strategies of the player more robust against invasion. Playing defection in layer ℓ_r when the temptation value is small does not have a big effect on the global payoff of the node. As a consequence, in this particular game the multiplex structure especially increases the resilience of cooperators, and the average density of cooperators in this quadrant shows a statistically significant increase as we keep adding layers to the structure (Mann-Whitney U test, α = 0.05).
In the Prisoner’s Dilemma game, defection dominates cooperation. Related work22 shows that for ER networks using the Replicator rule, when the temptation and sucker payoffs are not too extreme, cooperation can survive by forming cooperative clusters, thus resisting the initial attempt of invasion by defectors and then spreading through the population. Our results for the monoplex are consistent with those. For the multiplex, we observe that the transition region between all-cooperator and all-defector situations is larger than for the monoplex, as in the case of the Stag Hunt game. It is worth noticing that regions with all-cooperator populations in the monoplex are not necessarily all-cooperator areas in the multiplex counterpart. This happens because the formation of cooperative clusters in one layer will also increase the fitness of these nodes in the other layers, regardless of the strategy used in those layers, and this can lead to a reinforcement of defector strategies due to the inter-layer dynamics, increasing their survival rate. The same inter-layer dynamics leads to a widening of the transition area, enabling the survival of cooperators in regions where they are not present in the monoplex scenario. If we take into account the whole Prisoner’s Dilemma quadrant, the conclusions are the same as in the Stag Hunt game: a statistically significant increase in the average density of cooperators occurs as we increase the number of layers.
The Snow Drift game has a stable equilibrium in mixed populations: it is an anti-coordination game. Previous work22 shows that for ER networks there are some regions in the T − S plane for which this game converges to single-strategy populations; for lower values of the temptation these regions are prone to cooperation. In multiplex networks, however, single-strategy regions are less common and mixed populations are the rule. This happens due to the same inter-layer dynamics explained earlier: the impact of a cooperator’s benefits on the other layers of the multiplex structure. It entails a significant reduction in the average fraction of cooperators for this quadrant, from 0.734 in the monoplex to 0.661 in the 10-layer multiplex.
Finally, the Harmony game has cooperation as its dominant strategy. For single-layer ER networks with the Replicator update rule, Roca et al.22 reported that the whole quadrant ends up in an all-cooperator configuration. However, in the multiplex scenario the average fraction of cooperators decreases significantly as we keep adding layers to the system: 0.932 for L = 5 and 0.910 for L = 10. This increased resilience of defection can be explained as a consequence of the multiplex topology and the lack of degree correlations between layers: thanks to the payoff accumulated by acting as a cooperator in some layers, a node can resist as a defector against cooperators in other layers.
We can mathematically prove that defectors can survive and be stable in the Harmony game on ER multiplex networks by analyzing the simplest situation: let us assume a multiplex structure with L layers. In one single layer (for simplicity we assume it is the first one) we have one single node playing as a defector, while it plays as a cooperator in all the other L − 1 layers. There are no more defectors anywhere in the system. This node’s connectivity in layer α is k_α and, recalling that R = 1 and P = 0, the total payoff of the node that is defecting in one single layer is given by:

$$P_D = T\,k_1 + \sum_{\alpha=2}^{L} k_\alpha.$$
The payoff of any of this node’s neighbours in layer 1 (note that all of them play as cooperators in every layer), with degree $k_j^{\alpha}$ in layer α, is:

$$P_C = \sum_{\alpha=1}^{L} k_j^{\alpha} - 1 + S,$$

since the neighbour obtains R = 1 from every link except for the one with the defector in layer 1, which yields S.
Thus, in order to survive as a defector in layer 1, the following inequality must be fulfilled for each of the node’s neighbours j in that layer:

$$T\,k_1 + \sum_{\alpha=2}^{L} k_\alpha \;\geq\; \sum_{\alpha=1}^{L} k_j^{\alpha} - 1 + S.$$
We can estimate both a soft and a hard limit for the previous inequality. As a soft limit, and assuming we have independent, uncorrelated Erdős-Rényi layers in our multiplex network, we can approximate every degree by ⟨k⟩ and get:

$$T\,\langle k \rangle + (L-1)\langle k \rangle \;\geq\; L\,\langle k \rangle - 1 + S, \qquad \text{that is,} \qquad \langle k \rangle\,(T-1) \geq S - 1.$$
A hard limit for the condition can instead be calculated by approximating the degree of each cooperator neighbour by k_max:

$$T\,k_1 + \sum_{\alpha=2}^{L} k_\alpha \;\geq\; L\,k_{\max} - 1 + S.$$
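As a quick numerical illustration of the soft limit (our own worked example, using the simulation value ⟨k⟩ = 6):

$$\langle k \rangle (T - 1) \geq S - 1 \;\;\Longrightarrow\;\; T \geq 1 + \frac{S - 1}{6},$$

so a defector surrounded by typical-degree cooperators at, say, S = 0.4 would need T ≥ 0.9. In most of the Harmony quadrant (T < 1, S > 0) the typical-degree condition therefore fails, and a surviving defector must rely on favourable degree fluctuations, which is exactly what the probability calculation below quantifies.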
Beyond these limiting cases, we can calculate the probability of this topological situation actually happening. First of all, we have to define the probability that a node i has degree k, P(X = k). In our model, in order to avoid the non-negligible effect of unconnected nodes, we impose a minimum connectivity k_min. To get a more accurate approximation of our degree distribution we take this minimum into account:

$$P(X = k \mid X \geq k_{\min}) = \frac{P(X = k)}{P(X \geq k_{\min})}, \qquad k \geq k_{\min}.$$
As stated previously, the payoff of cooperators playing against cooperators is proportional to their degree, since we set R = 1. In this example we have L − 1 fully cooperative layers, so the payoff that the focal node obtains in these layers is given by its degree in the aggregate network of these L − 1 layers. Moreover, the payoff distribution of the nodes that play cooperation in all layers is given by the degree distribution of the aggregation of all L layers. Since we impose no inter-layer degree correlations, the degree distribution of the aggregated networks can be modelled using the convolution of the single-layer degree distributions.
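A minimal numerical sketch of this convolution step (our own illustration, assuming truncated-Poisson single-layer distributions and an arbitrary cut-off k_cut):

```python
import numpy as np
from math import exp, factorial

def truncated_poisson(mean_k, k_min=1, k_cut=60):
    """Single-layer ER degree distribution, restricted to k >= k_min."""
    pmf = np.array([exp(-mean_k) * mean_k**k / factorial(k) for k in range(k_cut)])
    pmf[:k_min] = 0.0
    return pmf / pmf.sum()

def aggregated_degree_pmf(mean_k, layers, k_min=1):
    """Degree distribution of the aggregate of `layers` uncorrelated layers:
    the repeated convolution of the single-layer distribution."""
    single = truncated_poisson(mean_k, k_min)
    pmf = single.copy()
    for _ in range(layers - 1):
        pmf = np.convolve(pmf, single)
    return pmf                      # pmf[k] = P(aggregated degree = k)

# e.g. payoff-determining degree distribution over the L - 1 = 4 cooperative layers
pmf_coop_layers = aggregated_degree_pmf(mean_k=6, layers=4)
```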
The probability that there exists a topological configuration fulfilling the payoff condition specified by Eq. (8) is given by:
where q is the payoff obtained by the defector node playing as a cooperator in the other L − 1 layers. With that information, an upper bound for the aggregated degree of the defector’s neighbours can be defined as q + k_1 · T − S + 1: if all the neighbours have an aggregated degree below this upper bound, the defector can survive. It is worth noticing that this upper bound for the degree of a cooperator is a discretization of payoff values that involve S and T. This means that the survival probability of a defector only changes when the relation between S and T changes by a large enough amount.
The expression for the degree distribution of an Erdős-Rényi network, assuming the restriction on the minimum degree, is the correspondingly truncated Poisson distribution:

$$P(X = k) = \frac{e^{-\langle k \rangle}\,\langle k \rangle^{k}/k!}{\sum_{k' \geq k_{\min}} e^{-\langle k \rangle}\,\langle k \rangle^{k'}/k'!}, \qquad k \geq k_{\min}.$$
In Fig. 3 we show the probability of a defector surviving in a fully cooperative population, calculated numerically using Eq. (15). We observe that this probability naturally increases with T, because this is the payoff that a defector obtains against a cooperator, but it depends only slightly on the payoff of a cooperator against a defector, S. The number of layers has a huge impact on this probability: as the number of layers increases, the probability becomes more uniform in the T − S plane, increasing in general. This can be explained by the relative contribution to the accumulated payoff that comes from layer 1 (the layer where the defector survives): the more layers are added to the system, the smaller this relative contribution. For a large number of layers, this implies that the values of S and T (which determine the payoff) matter less for the probability of a defector persisting in the system. For networks with a higher mean degree (see Fig. 4 in the Supplementary Information), however, the chances of a defector surviving are lower: if the number of neighbours of the defector node is higher, then the probability that one of them has a larger payoff is also higher, so the defector will tend to imitate that neighbour’s behaviour (in other words, its chances of survival decrease).
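The same survival probability can also be estimated by direct Monte Carlo sampling of the topological configuration described above, which is how the following Python sketch proceeds (our own illustration, reusing the truncated-Poisson helpers defined earlier; it is not the closed-form expression of Eq. (15), and for simplicity it ignores the size bias of the neighbours’ layer-1 degrees).

```python
import numpy as np

def survival_probability(T, S, mean_k=6, layers=5, samples=20000, seed=0):
    """Monte Carlo estimate of the probability that a lone defector in layer 1
    of an otherwise fully cooperative multiplex outperforms (or ties) all of
    its layer-1 neighbours, and hence never imitates them."""
    rng = np.random.default_rng(seed)
    single = truncated_poisson(mean_k)                     # one-layer degree pmf
    agg = aggregated_degree_pmf(mean_k, layers)            # all-L-layer degree pmf
    survived = 0
    for _ in range(samples):
        k = rng.choice(len(single), size=layers, p=single) # defector's degrees
        k1, q = k[0], k[1:].sum()                          # q: cooperative payoff, layers 2..L
        neighbours = rng.choice(len(agg), size=k1, p=agg)  # neighbours' aggregated degrees
        # survival condition: T*k1 + q >= K_j - 1 + S for every layer-1 neighbour j
        if np.all(T * k1 + q >= neighbours - 1 + S):
            survived += 1
    return survived / samples

print(survival_probability(T=0.8, S=0.5))
```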
Coherent Cooperation
Prompted by the topological configurations described earlier, we can now define a “coherent cooperator” as a node that, at a given instant of time, plays as a cooperator in all L layers of the system. Similarly, we can define a “coherent defector” as a node that, at a given instant of time, plays as a defector in all L layers. Finally, those individuals that are neither coherent cooperators nor coherent defectors will be called “incoherent” individuals. These new terms should not be mistaken for the concepts of “pure cooperators”, “pure defectors” and “fluctuating individuals” introduced in Ref. 19, which implied a temporal consistency of the agents’ strategies. Also, we want to stress that an incoherent individual as defined here is clearly different from the concept of a mixed population, which simply refers to both strategies coexisting in a population. Moreover, we have to take into account that a coherent behaviour is neither trivial nor easily reachable, since our simulations start with fully mixed populations (randomly distributed and uncorrelated strategies in all layers), so the dynamics that leads to coherence is especially interesting to study.
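With the per-layer strategy vectors used in the earlier sketches, this classification is straightforward; the following minimal helper (our own) also tallies the three classes whose fractions are reported in Fig. 4.

```python
from collections import Counter

def coherence_classes(strategies):
    """Label each node 'coherent C', 'coherent D' or 'incoherent' from its
    strategies across layers (strategies[l][i] is True if i cooperates in l)."""
    n = len(strategies[0])
    labels = []
    for i in range(n):
        plays = [strat[i] for strat in strategies]
        if all(plays):
            labels.append("coherent C")
        elif not any(plays):
            labels.append("coherent D")
        else:
            labels.append("incoherent")
    return labels

# toy example: 3 nodes, 2 layers
toy = [[True, True, False], [True, False, False]]
print(Counter(coherence_classes(toy)))  # one node of each class
```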
In Fig. 4 we show the fraction of coherent cooperators (left column), coherent defectors (middle column) and incoherent individuals (right column) for 5 layers (top row) and 10 layers (bottom row). The formation of coherent cooperators is particularly difficult, and it is interesting to notice that even in the Harmony game there is only a low fraction of them (except for a small area around the extreme case of T = 0 and S = 1). In the other quadrants the fraction is very small (in particular, the Prisoner’s Dilemma presents basically no coherent cooperation). This implies that most of the cooperation shown by the system comes from incoherent individuals. We also observe that the fraction of coherent cooperators decreases quickly with the number of layers for any game. As we have said, the origin of such results resides in the fact that a defector takes advantage of its own cooperative behaviour in other layers, especially in regions of the T − S plane prone to cooperation.
Conversely, regarding the fraction of coherent defectors, we observe that their presence is very strong in most of the Prisoner’s Dilemma region and in part of the Stag Hunt area, and that it decreases only slightly when increasing the number of layers from 5 to 10. This fact is easy to understand: the resilience of a cooperator in a hostile environment rests essentially on how it performs as a cooperator; the advantage of playing as a defector in other layers is practically zero, because in a largely defector population the contribution to the payoff of a defector that plays against another defector is zero (P = 0). Thus, in these regions, the survival rate of cooperation does not improve by playing the defector strategy in other layers.
Regarding incoherent individuals, we observe that they are very prevalent for all games (except for the extreme area of the Harmony game around (T = 0, S = 1), where cooperation is very profitable, and the bottom half of the hard Prisoner’s Dilemma, where cooperation is extremely expensive). Incoherent individuals contribute significantly to the average density of cooperation in a large central area of the T − S plane, particularly in the areas that separate full cooperation from full defection (see also Fig. 4 in the Supplementary Information for a detailed description of the fraction of incoherent individuals playing as cooperators). This area of prevalent incoherent individuals increases with the number of layers or, in other words, it gets harder and harder to be a coherent strategist as the number of layers increases.
Fig. 4 also confirms what we showed analytically earlier: defection can survive in the Harmony game, as long as the individual defecting in a particular layer is an incoherent individual; it plays as a cooperator in other layers and obtains enough payoff from them to avoid having to switch strategies (see also Fig. 5 in the Supplementary Information for further detail on the payoff of cooperators and defectors).
Interestingly enough, in Fig. 4 we can observe that coherent players of opposite types do not coexist in the same population. Another important point is where coherent players can coexist with incoherent players. The area where coherent cooperators interact with incoherent players is wide and gets wider as we keep adding layers to the multiplex. However, the area of coexistence of coherent defectors and incoherent players is very narrow and is only slightly affected when layers are added to the structure. This means that coherent defection is a very dominant strategy that almost forbids the existence of any other kind of player.
Discussion
In this paper we have presented a systematic and comprehensive analysis of the outcomes of cooperation dynamics on ER multiplex networks for the four games of the T − S plane, using the Replicator updating rule, and we have compared our results with those already known for the games on the monoplex. We have also analyzed the microscopic behaviour of the nodes and coined the terms coherent cooperator, coherent defector and incoherent player.
In particular, we have found that the stationary distribution of cooperation in the T − S plane becomes less sharp as more layers are added. In the monoplex case there is a very narrow area separating the all-cooperator from the all-defector regions for the Stag Hunt and Prisoner’s Dilemma games, but in the multiplex scenario this becomes a wider region with intermediate values of cooperation. We also find that the all-defector region shrinks as the number of layers increases. As a counter-effect, though, we find a slight decrease in the value of cooperation (even in the quadrant of the Harmony game), from total cooperation to values around 90%. These results are consistent with and generalize those found in Ref. 30: the introduction of a multiplex structure in the population helps promote cooperation in regions of the parameter space in which it cannot survive in the monoplex scenario, at the expense of a moderate decrease of cooperation in those where traditionally it was very high. We have explored the microscopic underpinnings of these phenomena, previously observed but unexplained in the aforementioned paper.
Thus, regarding the microscopic behaviour of the nodes, we have found that, in general and at a given time step, there are three types of individuals: those coherently acting as cooperators in all layers, those acting as coherent defectors, and a group of incoherent individuals that play as cooperators in some layers and as defectors in others. The existence of this third, incoherent group is at the root of the explanation of the survival of defection in the Harmony game on a multilayered network, and it is also responsible for a large part of the cooperation in the central areas of the T − S plane, where cooperation is lower in a monoplex. Also, we have analyzed how these three types of players interact among themselves, concluding that there are plenty of interactions between incoherent players and coherent cooperators, fewer interactions between incoherent players and coherent defectors, and practically no interaction between the two types of coherent players. Moreover, this is a very plausible social scenario: some people may behave consistently in all their types of interactions (for example at work, at home, with friends, …), either cooperating or defecting, while others may choose different strategies for different layers (for example, cooperate with family and defect at work). We have found that the fraction of incoherent players increases with the number of layers, which means that as the number of contexts where a player interacts increases, it gets harder to maintain a coherent behaviour in all of them. Regarding the dependence on the initial fraction of cooperation, we found that our system behaves consistently with what was found for the monoplex network, and the effect of adding more layers is preserved or even increased with increasing initial fraction of cooperators.
To summarize, the introduction of multiplex networks is not only a more realistic representation of social systems, allowing for more sophisticated individual behaviours; as has been shown in other contexts too, it also has a profound effect on the dynamics developing on top of it.
References
Smith, J. M. Evolution and the Theory of Games (Cambridge University Press, Cambridge, UK, 1982).
Szabó, G. & Fáth, G. Evolutionary games on graphs. Phys. Rep. 446, 97–216 (2007).
Nowak, M. A. Evolutionary Dynamics. Exploring the equations of life (Harvard University Press, Cambridge, MA, 2006).
Nowak, M. A. Five Rules for the Evolution of Cooperation. Science 314, 1560–1563 (2006).
Pennisi, E. How Did Cooperative Behavior Evolve? Science 309, 93–93 (2005).
Hauert, C., Traulsen, A., Brandt, H., Nowak, M. A. & Sigmund, K. Via Freedom to Coercion: The Emergence of Costly Punishment. Science 316, 1905–1907 (2007).
Grujić, J., Fosco, C., Araujo, L., Cuesta, J. A. & Sánchez, A. Social Experiments in the Mesoscale: Humans Playing a Spatial Prisoner’s Dilemma. PLOS ONE 5, e13749; 10.1371/journal.pone.0013749 (2010).
Gracia-Lázaro, C. et al. Heterogeneous networks do not promote cooperation when humans play a Prisoner’s Dilemma. Proc. Natl. Acad. Sci. U.S.A. 109, 12922–12926 (2012).
Suri, S. & Watts, D. J. Cooperation and Contagion in Web-Based, Networked Public Goods Experiments. PLOS ONE 6, e16836; 10.1371/journal.pone.0016836 (2011).
Rand, D. G., Arbesman, S. & Christakis, N. A. Dynamic social networks promote cooperation in experiments with humans. Proc. Natl. Acad. Sci. U.S.A. 108, 19193–19198 (2011).
Grujić, J. et al. A comparative analysis of spatial Prisoner’s Dilemma experiments: Conditional cooperation and payoff irrelevance. Sci. Rep. 4; 10.1038/srep04615 (2014).
Wellman, B. & Berkowitz, S. D. Social structures: A network approach (Harvard University Press, Cambridge, MA, 1988).
Wasserman, S. & Faust, K. Social Network Analysis: Methods and applications (Cambridge University Press, Cambridge, UK, 1994).
González, M. C., Hidalgo, C. A. & Barabási, A.-L. Understanding individual human mobility patterns. Nature 453, 779–782 (2008).
Newman, M. E. J. Coauthorship networks and patterns of scientific collaboration. Proc. Natl. Acad. Sci. U.S.A. 101, 5200–5205 (2004).
Santos, F. & Pacheco, J. Scale-Free Networks Provide a Unifying Framework for the Emergence of Cooperation. Phys. Rev. Lett. 95, 098104 (2005).
Perc, M. & Szolnoki, A. Coevolutionary games - A mini review. BioSystems 99, 109–125 (2010).
Perc, M. et al. Evolutionary dynamics of group interactions on structured populations: a review. J. R. Soc. Interface 10, 20120997 (2013).
Gómez-Gardeñes, J., Campillo, M., Floría, L. & Moreno, Y. Dynamical Organization of Cooperation in Complex Topologies. Phys. Rev. Lett. 98, 108103 (2007).
Poncela, J., Gómez-Gardeñes, J., Floría, L. M., Sánchez, A. & Moreno, Y. Complex Cooperative Networks from Evolutionary Preferential Attachment. PLOS ONE 3, e2449; 10.1371/journal.pone.0002449 (2008).
Poncela, J., Gómez-Gardeñes, J., Floría, L. M. & Moreno, Y. in Handbook of Optimization in Complex Networks, Ch. 5, 115–136 (Springer, Gainesville, FL, 2012).
Roca, C. P., Cuesta, J. A. & Sánchez, A. Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics. Phys. Life. Rev. 6, 208–249 (2009).
Mucha, P. J., Richardson, T., Macon, K., Porter, M. A. & Onnela, J.-P. Community Structure in Time-Dependent, Multiscale and Multiplex Networks. Science 328, 876–878 (2010).
Kurant, M. & Thiran, P. Layered Complex Networks. Phys. Rev. Lett. 96, 138701 (2006).
Kivelä, M. et al. Multilayer networks. J. Complex Networks 2, 203–271; 10.1093/comnet/cnu016 (2014).
De Domenico, M. et al. Mathematical Formulation of Multilayer Networks. Phys. Rev. X 3, 041022; 10.1103/PhysRevX.3.041022 (2013).
Granell, C., Gómez, S. & Arenas, A. Dynamical Interplay between Awareness and Epidemic Spreading in Multiplex Networks. Phys. Rev. Lett. 111, 128701 (2013).
Gómez, S. et al. Diffusion Dynamics on Multiplex Networks. Phys. Rev. Lett. 110, 028701 (2013).
Boccaletti, S. et al. The structure and dynamics of multilayer networks. Phys. Rep. 544, 1–122 (2014).
Gómez-Gardeñes, J., Reinares, I., Arenas, A. & Floría, L. M. Evolution of Cooperation in Multiplex Networks. Sci. Rep. 2; 10.1038/srep00620 (2012).
Santos, M. D., Dorogovtsev, S. N. & Mendes, J. F. F. Biased imitation in coupled evolutionary games in interdependent networks. Sci. Rep. 4; 10.1038/srep04436 (2014).
Wang, Z., Wang, L. & Perc, M. Degree mixing in multilayer networks impedes the evolution of cooperation. Phys. Rev. E 89, 052813 (2014).
Wang, Z., Szolnoki, A. & Perc, M. Evolution of public cooperation on interdependent networks: The impact of biased utility functions. Europhys. Lett. 97, 48001 (2012).
Wang, Z., Szolnoki, A. & Perc, M. Self-organization towards optimally interdependent networks by means of coevolution. New J. Phys. 16, 33041 (2014).
Nowak, M. A. & May, R. M. Evolutionary games and spatial chaos. Nature 359, 826–829 (1992).
Hauert, C. & Doebeli, M. Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature 428, 643–646 (2004).
Licht, A. N. Games commissions play: 2 × 2 games of international securities regulation. Yale J. Int. Law (1999).
Axelrod, R. Effective Choice in the Prisoner’s Dilemma. J. Confl. Resolut. 24, 3–25 (1980).
Axelrod, R. The Emergence of Cooperation among Egoists. Am. Polit. Sci. Rev. 75, 306 (1981).
Rapoport, A. in Game Theory as a Theory of a Conflict Resolution, Ch. 1, 17–34 (Springer Netherlands, Dordrecht, NL, 1974).
Rousseau, J.-J. A Discourse on Inequality (Penguin Books, London, UK., 2003).
Luce, R. D. & Raiffa, H. Games and Decisions. Introduction and Critical Survey (Courier Dover Publications, New York, NY, 2012).
Smith, J. M. & Parker, G. A. The logic of asymmetric contests. Anim. Behav. 24, 159–175 (1976).
Acknowledgements
Authors acknowledge support from MINECO through Grant FIS2012-38266; the EC FET-Proactive Project MULTIPLEX (grant 317532), ICREA Academia and the James S. McDonnell Foundation.
Author information
Contributions
J.T.M. performed simulations and prepared figures. J.T.M., S.G., A.A. and J.P.C. contributed equally to the design of the experiments, interpretation and discussion of the results and writing of this paper.
Ethics declarations
Competing interests
The authors declare no competing financial interests.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder in order to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/