Figure 3 - uploaded by Andreas Diekmann
a: Fractions of strategy types in the buyer population for 200 generations 


Source publication
Chapter
Full-text available
In online interactions in general, but especially in interactions between buyers and sellers on internet-auction platforms, the interacting parties must deal with trust and cooperation problems. Whether a rating system is able to foster trust and cooperation through reputation and without an external enforcer is an open question. We therefore explo...

Context in source publication

Context 1
... = - 1 ). If the trust shown by the buyer has been honored by the seller, the seller receives a positive rating ( b = 1 ). Note that the buyer is never rated by the seller, and a seller to whom trust was refused is not rated either. A generation consists of a specified number of rounds. In every round, an agent interacts with a randomly assigned opponent. Given these parameters, it is improbable that the same buyer-seller pair occurs twice in a generation; repeated interactions between the same two agents are therefore not accounted for.

In every interaction, the buyers decide on the basis of an individually defined strategy whether or not to trust the sellers. In every transaction that takes place as a result of the trust shown by the buyers, the sellers decide on the basis of their own individually defined strategies whether or not to honor the buyer's trust. Sellers know only their own interaction history; thus they do not know how often a buyer showed no trust. They do, however, know something about the mistrust of buyers in general, since they know how often they were exposed to it themselves. Buyers know their own interaction history and the whole transaction history of the sellers with whom they interacted within the same generation. Explicitly, the buyers know the number of negative ( u ) and positive ( v ) ratings a seller has received, as well as his reputation index r . The reputation index r is calculated as the seller's positive ratings as a fraction of the total number of his transactions ( r = v / (u + v) ). A seller who has not yet engaged in a transaction has no ratings and, as a consequence, a reputation index of r = 0 .

The buyer population and the seller population each consist of m = 256 agents. In every generation, every seller and every buyer is involved in k interactions (the length of a generation is therefore k rounds). In order to avoid end-round effects, the generation length is not known before the simulation is executed.
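The rating bookkeeping described above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours, not taken from the original simulation code):

```python
def reputation_index(u, v):
    """Reputation index r = v / (u + v): a seller's positive ratings (v)
    as a fraction of all his ratings (u negative plus v positive).
    A seller who has no ratings yet gets r = 0."""
    total = u + v
    if total == 0:
        return 0.0
    return v / total
```

For example, a seller with one negative and three positive ratings has r = 0.75, while a newcomer without any transactions starts at r = 0.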
At the beginning of the first generation, the buyer strategies are distributed in equal proportions throughout the buyer population; the same applies to the distribution of seller strategies in the seller population. At the end of every generation, the payoffs that all agents with the same strategy have obtained are summed into a total score. In the next generation, the strategies are then distributed throughout the buyer and seller populations, respectively, in proportion to the scores they have obtained. Neither the history, the ratings, nor the reputation of a parent agent is transferred to a child agent in the next generation.

First, we conduct simple scenario experiments in which we test different ad hoc buyer and seller strategies. In this way, we explore how the properties of the employed strategies, and combinations thereof, affect the level of trust and cooperation in the system. Second, we arrange tournaments and ask participants to submit their own strategies to compete against other strategies. The tournament approach allows us both to make use of "distributed natural intelligence" in deriving challenges to the mechanisms of our artificial market and to check our experimental findings.

In the first scenario experiment we start with a buyer population, half of which consists of agents who always give trust to a seller (ALL C). The other half is composed of agents who never give trust to a seller (ALL D) (see Figure 2a). The seller population, in turn, is dominated by agents who always honor trust; however, one percent of the seller population consists of agents who never honor trust given by a buyer (see Figure 2b). Although the initial fraction of untrustworthy sellers is very small, these sellers manage to invade the population of trustworthy sellers after 15 generations. At first, the seller population is trustworthy at a very high rate.
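The end-of-generation update, in which strategies propagate in proportion to their summed payoffs, can be sketched as follows (a simplified illustration with our own naming; the chapter does not specify how fractional shares are rounded):

```python
def next_generation(strategy_scores, m=256):
    """Redistribute a population of m agents across strategies in
    proportion to the total payoff each strategy earned in the last
    generation. History, ratings and reputation are not carried over."""
    total = sum(strategy_scores.values())
    if total <= 0:
        # degenerate case: fall back to an even split
        n = len(strategy_scores)
        return {s: m // n for s in strategy_scores}
    return {s: round(m * score / total)
            for s, score in strategy_scores.items()}
```

For instance, if ALL C earned a total score of 300 and ALL D a total of 100, the next generation of 256 buyers would hold roughly 192 ALL C agents and 64 ALL D agents.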
Agents playing ALL D in the buyer population die out quite early, because giving trust is a more successful strategy. After the distrustful buyers have died out, the abuse of trust starts to spread through the seller population. Figure 2c shows that, in the beginning, trust is honored at a very high rate but disappears from the market after the distrustful buyers' no-trust interactions have vanished. What is left is a market of exploiting sellers and exploitable buyers.

In order to protect trustful buyers from exploitation, we introduce a rating mechanism and a buyer strategy that accounts for a seller's reputation. The initial seller population in scenario 2 consists, in equal halves, of trustful and deceitful sellers (see Figure 3b). The buyer population contains, besides the two unconditional strategies ALL C and ALL D, the strategy RT 25. Strategy RT 25 gives trust only to a seller whose reputation index is greater than or equal to r = 0.25. The initial fraction of each unconditional strategy is 46% of the buyer population; accordingly, RT 25 starts at a fraction of 8% (see Figure 3a).

From the beginning, buyers playing RT 25 discriminate against sellers who do not have a positive reputation. This causes a decrease in the fraction of sellers playing ALL D on the one hand and an increase in the fraction of buyers playing RT 25 on the other. The strategy RT 25 is more successful than ALL C because it does not get exploited by deceitful sellers. As soon as there are enough trustful sellers and, correspondingly, a low rate of sellers playing ALL D, buyers playing ALL C become more successful: because of their unconditional cooperation, they also give trust to sellers without a reputation, whereas buyers playing RT 25 do not cooperate with sellers who have not yet established a positive reputation.
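The decision rule of RT 25 reduces to a threshold test on the reputation index (an illustrative sketch in our own naming; only the strategy name and threshold come from the chapter):

```python
def rt25_trusts(u, v, threshold=0.25):
    """RT 25: give trust iff the seller's reputation index
    r = v / (u + v) is at least the threshold. A seller without any
    ratings has r = 0 and is therefore never trusted by RT 25."""
    r = v / (u + v) if (u + v) > 0 else 0.0
    return r >= threshold
```

This makes explicit why RT 25 discriminates against newcomers: with u = v = 0 the index is 0 and the threshold test fails.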
With the rising success of buyers playing ALL C, sellers playing ALL D have more opportunities to abuse trust and in turn become more successful themselves. At the peak of the deceitful sellers' success, the fraction of buyers playing RT 25 increases again while the fraction of buyers playing ALL C decreases. These coevolutionary dynamics are well mapped by the transaction frequencies depicted in Figure 3c. In terms of the interdependence between the abuse of trust, reputation, mistrust, and trust, the dynamics can be described as follows. Starting from a growing rate of abused trust, the reputation mechanism ensures that untrustworthy sellers are recognized and discriminated against. A rising rate of mistrust, due to the distrustful buyers' preference for sellers with a positive reputation, is followed by a more confident buyer population and a higher rate of honored trust. The approximate averages of the transaction frequencies level off at 60% for Honored Trust, 25% for Mistrust, and 15% for Abused Trust.

However, if the initial fraction of agents playing RT 25 is below 7%, buyers playing ALL C die out too early, depriving trustworthy sellers of the possibility to build a good reputation. Without reputations, the strategy RT 25 is not able to distinguish between trustworthy and deceitful sellers and would therefore never give trust in any interaction. Without trustful buyers no transactions take place, and without transactions a market does not persist. The results from scenario 2 suggest that trustworthy sellers must get the opportunity to build a positive reputation; otherwise, conditional strategies like RT 25 cannot distinguish between trustworthy and deceitful sellers. A buyer strategy that incorporates this ability and, at the same time, gives sellers without a reputation the opportunity to prove their trustworthiness is independent of the existence of a more trustful strategy like ALL C. In this scenario, we introduce the strategy RT 25 C.
The only difference between RT 25 and RT 25 C is that RT 25 C unconditionally cooperates in its first interaction. Additionally, the initial fraction of buyer agents playing RT 25 C is 6%. Note that at this initial level the strategy RT 25 failed to establish and maintain a system based on trust and cooperation. Figures 4a, b, and c show the effect of this slight change in buyer strategies. Indeed, the fraction of buyers playing ALL C decreases, and the strategy RT 25 C is more successful in this regard. However, it is not self-evident that the success of RT 25 C implies a higher rate of trust and cooperation. RT 25 C, with its single unconditional cooperation, does not depend on confiding buyers playing ALL C and, at the same time, is more successful because it cannot be exploited. Since ALL C fails to survive in the buyer population, the deceitful seller strategy ALL D is not able to persist either. What is left is a functioning market with trustful but cautious buyers and trustworthy sellers.

The scenario experiments show that, despite the lack of an external sanctioning authority, cooperation evolves under two conditions. On the one hand, a minimal fraction of buyers must make use of the sellers' reputation in their buying strategies. On the other hand, trustworthy sellers must be given opportunities to gain a good reputation through their cooperative behavior. Strategies that account for both requirements turn out to be successful and, at the same time, conducive to a market based on trust and cooperation.

By organizing round-robin tournaments, we aim to find new seller and buyer strategies that, in competing with each other, challenge our artificial market system. Strategies were collected on four different occasions and were mostly handed in by students or graduates from different universities and faculties. In every tournament, the best buyer and seller strategy could win a book token worth 20 euros.
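The modification that distinguishes RT 25 C amounts to a single extra branch (again an illustrative sketch in our own naming, reading "first interaction" as the buyer's own first interaction of the generation, as the text states):

```python
def rt25c_trusts(u, v, is_first_interaction, threshold=0.25):
    """RT 25 C: cooperate unconditionally in the buyer's first
    interaction; in all later interactions apply the RT 25 rule
    (trust iff r = v / (u + v) >= threshold)."""
    if is_first_interaction:
        return True
    r = v / (u + v) if (u + v) > 0 else 0.0
    return r >= threshold
```

The unconditional first move is what gives sellers without ratings a chance to earn a positive rating, removing the dependence on ALL C buyers.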
The tournament rules were provided in writing and contained virtually the same information as the section describing the simulation experiments. Figures 5a, b, and c show a typical run of a tournament with 11 buyer and 22 seller strategies, comprising all strategies that had been handed in so far. KELLER, the winning ...

Similar publications

Article
Recent theoretical contributions have suggested a theory of leadership that is grounded in complexity theory, hence regarding leadership as a complex process (i.e., nonlinear; emergent). This article tests if complexity leadership theory promotes efficiency in work groups. 40 groups of five participants each had to complete four decision making tas...

Citations

... For example, Pahl-Wostl and Ebenhöh (2004) use a heuristic approach in their agent-based model by relating attributes like cooperativeness, fairness, risk aversion, and negative and positive reciprocity to trustworthiness. However, most models depict trust in specific contexts, such as supply chains (Kim 2009; Meijer and Verwaart 2005; Tykhonov et al. 2008), electronic markets (Breban and Vassileva 2002; Diekmann and Przepiorka 2005), service consumers (Maximilien and Singh 2005), or peer-to-peer networks (Li et al. 2011). The explanatory power of agent-based systems can be significantly increased by integrating qualitative and quantitative findings from empirical studies (Nooteboom 2015). ...
Chapter
Trust plays a pivotal role in many different contexts and thus has been investigated by researchers in a variety of disciplines. In this chapter, we provide a comprehensive overview of methodological approaches to investigating trust and its antecedents. We explain how quantitative methods may be used to measure expectations about a trustee or instances of communication about trust efficiently, and we explain how using qualitative measures may be beneficial to researching trust in less explored contexts and for further theory development. We further point out that mixed methods research (uniting both quantitative and qualitative approaches) may be able to grasp the full complexity of trust. Finally, we introduce how agent-based modeling may be used to simulate and predict complex trust relationships on different levels of analysis. We elaborate on challenges and advantages of all these different methodological approaches to researching trust and conclude with recommendations to guide trust researchers in their planning of future investigations on both situational trust and long-term developments of trust in different contexts, and we emphasize why we believe that such undertakings will benefit from interdisciplinary approaches.
... However, the binary reputation may not be realistic in that only the last behavior of an individual in the social dilemma situation determines the reputation of the individual. In fact, experimental (Wedekind & Milinski, 2000; Milinski et al., 2001; Seinen & Schram, 2006; Milinski et al., 2002; Wedekind & Braithwaite, 2002; Keser, 2003; Bolton & Katok, 2004; Bolton et al., 2005; Engelmann & Fischbacher, 2009) and numerical (Diekmann & Przepiorka, 2005) studies of indirect reciprocity in the context of online marketplaces often assume that the reputations are many valued, which complies with the reality of online marketplaces (Resnick & Zeckhauser, 2002; Resnick et al., 2006). More than binary valued reputations have also been employed in numerical studies of indirect reciprocity in the theoretical biology literature (Nowak & Sigmund, 1998b; Leimar & Hammerstein, 2001; Mohtashemi & M Roberts, 2008). ...
... In previous literature, the Self strategy is recognized as a strong competitor that spoils cooperation in the indirect reciprocity game with more than two possible reputation values (Leimar & Hammerstein, 2001). Our conclusion that image scoring can support cooperation is consistent with the results derived from behavioral experiments (Wedekind & Milinski, 2000;Milinski et al., 2001;Seinen & Schram, 2006) and those derived from numerical simulations (Nowak & Sigmund, 1998b;Diekmann & Przepiorka, 2005) but opposite to those derived from the binary reputation model (Panchanathan & Boyd, 2003;Ohtsuki, 2004;Ohtsuki & Iwasa, 2004;Ohtsuki & Iwasa, 2007). We reached this conclusion by simply introducing a third reputation to the standard binary model of indirect reciprocity. ...
Article
Indirect reciprocity is a reputation-based mechanism for cooperation in social dilemma situations when individuals do not repeatedly meet. The conditions under which cooperation based on indirect reciprocity occurs have been examined in great detail. Most previous theoretical analyses assumed, for mathematical tractability, that an individual possesses a binary reputation value, i.e., good or bad, which depends on their past actions and other factors. However, in real situations, reputations of individuals may be multiple valued. Another puzzling discrepancy between theory and experiments is the status of so-called image scoring, in which cooperation and defection are judged to be good and bad, respectively, independent of other factors. Such an assessment rule is found in behavioral experiments, whereas it is known to be unstable in theory. In the present study, we fill both gaps by analyzing a trinary reputation model. By an exhaustive search, we identify all the cooperative and stable equilibria composed of a homogeneous population or a heterogeneous population containing two types of players. Some results derived for the trinary reputation model are direct extensions of those for the binary model. However, we find that the trinary model allows cooperation under image scoring under some mild conditions.
... How does the reputation-based cooperation emerge through population dynamics? Non-evolutionary numerical results for the trust game with reputation mechanisms [39,40] do not explain the stability and emergence of cooperation. An evolutionary theory [9,25] and numerical simulations [41] showed that reputations induce cooperation in the trust game. ...
Article
Many online marketplaces enjoy great success. Buyers and sellers in successful markets carry out cooperative transactions even if they do not know each other in advance and a moral hazard exists. An indispensable component that enables cooperation in such social dilemma situations is the reputation system. Under the reputation system, a buyer can avoid transacting with a seller with a bad reputation. A transaction in online marketplaces is better modeled by the trust game than other social dilemma games, including the donation game and the prisoner's dilemma. In addition, most individuals participate mostly as buyers or sellers; each individual does not play the two roles with equal probability. Although the reputation mechanism is known to be able to remove the moral hazard in games with asymmetric roles, competition between different strategies and population dynamics of such a game are not sufficiently understood. On the other hand, existing models of reputation-based cooperation, also known as indirect reciprocity, are based on the symmetric donation game. We analyze the trust game with two fixed roles, where trustees (i.e., sellers) but not investors (i.e., buyers) possess reputation scores. We study the equilibria and the replicator dynamics of the game. We show that the reputation mechanism enables cooperation between unacquainted buyers and sellers under fairly generous conditions, even when such a cooperative equilibrium coexists with an asocial equilibrium in which buyers do not buy and sellers cheat. In addition, we show that not many buyers may care about the seller's reputation under cooperative equilibrium. Buyers' trusting behavior and sellers' reputation-driven cooperative behavior coevolve to alleviate the social dilemma.
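The abstract's claim that the reputation system is indispensable can be illustrated with a minimal two-population replicator sketch of a trust game without any reputation mechanism. The payoff numbers below are our own illustrative choices (no transaction 0/0, honored trust 1/1, abused trust -1/2), not values from the article:

```python
def replicator_step(x, y, dt=0.01):
    """One Euler step of two-population replicator dynamics.
    x: fraction of buyers who trust; y: fraction of sellers who honor."""
    f_trust = y * 1 + (1 - y) * (-1)   # buyer payoff from trusting
    f_no_trust = 0.0                   # buyer payoff from abstaining
    f_buyer_avg = x * f_trust + (1 - x) * f_no_trust
    f_honor = x * 1                    # seller payoff from honoring
    f_abuse = x * 2                    # seller payoff from abusing
    f_seller_avg = y * f_honor + (1 - y) * f_abuse
    x += dt * x * (f_trust - f_buyer_avg)
    y += dt * y * (f_honor - f_seller_avg)
    return x, y
```

Because abusing strictly dominates honoring whenever any buyer trusts (f_abuse > f_honor for x > 0), honoring sellers are driven out and trust collapses: exactly the social dilemma that a reputation score on the trustee side is meant to resolve.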
... The purpose of the models varies widely. Some study the effectiveness of sanctions and/or reputation mechanisms and agencies to support them, e.g. in information systems or supply chains (Zacharia et al., 1999;Meijer & Verwaart, 2005;Diekmann & Przepiorka, 2005), or in artificial societies (Younger, 2005). ...
Article
Agent based simulation is useful for exploring possible worlds, seeing what might happen under what conditions as a result of complex interaction between agents, as in the building and breaking of trust. A survey of some attempts is given, and a specific case is summarized. Shortcomings and problems are also indicated.

BIOGRAPHY
Bart Nooteboom is author of ten books, e.g. A cognitive theory of the firm (2009), Inter-firm collaboration, learning and networks: An integrated approach (2004), Trust: Forms, foundations, functions, failures and figures (2002), Learning and innovation in organizations and economies (2000), Inter-firm alliances: Analysis and Design (1999), and some 300 articles on entrepreneurship, innovation and diffusion, organizational learning, transaction cost theory, inter-firm relations, and trust. He was awarded the Kapp prize for his work on organizational learning and the Gunnar Myrdal prize for his work on trust. He is a member of the Royal Netherlands Academy of Arts and Sciences.
... According to a model proposed by Ostrom (1998), trust, reciprocity and reputation are linked by a positive feedback mechanism, and Dasgupta (1999) pointed out that 'trust is based on reputation, and reputation is acquired on the basis of observed behaviour over time'. Trust has been a topic in economic research for some time (Gambetta 1988; Berg et al. 1995; Güth et al. 1997; McCabe & Smith 2000; McCabe et al. 2003; Krueger et al. 2007; Rigdon et al. 2007), as has the connection between trust and reputation (Lahno 1995; Diekmann & Przepiorka 2005; King-Casas et al. 2005), but only recently has the connection between trust, reputation and reciprocity been investigated (Bravo & Tamburino 2008). Using evolutionary modelling, Bravo and Tamburino have shown that trust and reciprocity, and therefore cooperation, are indeed dependent on reputation building and on the spread of information about others' reputations in the population. ...
Article
Empirical and theoretical evidence from various disciplines indicates that reputation, reputation building and trust are important for human cooperation, social behaviour and economic progress. Recently, it has been shown that reputation gained in games of indirect reciprocity can be transmitted by gossip. But it has also been shown that gossiping has a strong manipulative potential. We propose that this manipulative potential is alleviated by the abundance of gossip. Multiple gossip statements give a better picture of the actual behaviour of a person, and thus inaccurate or fake gossip has little power as long as it is in the minority. In addition, we investigate the supposedly strong connection between reciprocity, reputation and trust. The results of this experimental study (with 11 groups of 12 students each) document that gossip quantity helps to direct cooperation towards cooperators. Moreover, reciprocity, trust and reputations transferred via gossip are positively correlated. This interrelation might have helped to reach the high levels of cooperation that can be observed in humans.
... On the other hand, if the second subject cheats the first one, he/she can obtain a good payoff while the first one bears heavy losses (Coleman 1990; Dasgupta 1988). A model that better reproduces the structure of interaction underlying a non-simultaneous exchange is the so-called Trust game (hereafter TG), which is basically an asymmetric sequential PD (see Figure 1 and below) and represents the current standard of formalization for this class of social situations (Buskens 1998; Dasgupta 1988; Diekmann and Przepiorka 2005; Lahno 1995; McCabe and Smith 2000; Snijders and Kerens 1998). ...
... As far as we know, the problem of the development of cooperation in a TG situation has been previously explored by only one agent-based simulation work. Andreas Diekmann and Wojtek Przepiorka (2005) model the interaction between buyers and sellers on internet auction platforms using a tournament-style analysis derived from Axelrod's (1984) work. The authors found an important, but insufficient, effect of reputation on the development of cooperative strategies in that setting. ...
... This is mainly due to the intrinsic limitations of the game-theoretical approach, which needs to define relatively simple settings in order to derive clear-cut results. However, even the simulation model designed by Diekmann and Przepiorka (2005) explores only a limited number of deterministic strategies, mainly because of its authors' choice of employing a tournament approach. The strategies used by the agents in our model are instead stochastic (see below): as in Diekmann and Przepiorka, their moves are affected by the opponent reputation, but their choices are probabilistic and not fixed. ...
Article
Trust is an important concept that intersects a number of different disciplines, including economics, sociology, and political science, and maintains some meaning even in the natural sciences. Any situation where non-simultaneous exchanges between living organisms take place involves a problem of trust. We used computer simulations to study the evolution of trust in non-simultaneous exchange situations formalized by means of a Trust game. We found that trust and reciprocity-based cooperation are likely to emerge only when agents have the possibility of building trustworthy reputations and when the information regarding agents' past behaviors is sufficiently spread in the system. Both direct and indirect reciprocity play a role in fostering cooperation. However, the strength of the latter is greater under most of the examined conditions. In general, our findings are consistent with theories arguing for a positive feedback relationship between trust, reputation, and reciprocity, leading together to higher levels of cooperation.
... Generally, they study emergent properties of complex interaction that would be hard or impossible to tackle analytically. Some study the effectiveness of sanctions and/or reputation mechanisms and agencies to support them, e.g. in information systems or supply chains (Zacharia et al., 1999; Meijer & Verwaart, 2005; Diekmann & Przepiorka, 2005), or in artificial societies (Younger, 2005). Some study self-organization, e.g. in the internalization of externalities in a common pool resource (Pahl-Wost & Ebenhöh, 2004), the emergence of leadership in open-source communities (Muller, 2003), or the emergence of cooperative social action (Brichoux & Johnson, 2002). ...
Article
This chapter pleads for more inspiration from human nature, in agent-based modeling. As an illustration of an effort in that direction, it summarizes and discusses an agentbased model of the build-up and adaptation of trust between multiple producers and suppliers. The central question is whether, and under what conditions, trust and loyalty are viable in markets. While the model incorporates some well known behavioural phenomena from the trust literature, more extended modeling of human nature is called for. The chapter explores a line of further research on the basis of notions of mental framing and frame switching on the basis of relational signaling, derived from social psychology.
Article
In this document we describe all the work carried out as a project for the course Sistemas Informáticos of the Ingeniería Superior en Informática at the Universidad Complutense de Madrid. We propose a multi-agent system based on trust and reputation that runs a simulation of a market chain. For the reader's convenience, a documentation section is included in which we describe, in general terms, the concepts of trust and reputation and of agent-based simulation. Mercamadrid was chosen as the case study, and the details of adapting the system to it are explained later. We also include a detailed description of the structure and behaviour of the program as well as the tools it provides. Finally, we present a study of the simulation results, the conclusions drawn from a year of work, and guidance for those interested in continuing this work.
Article
Studies of reputation use in social interactions have indicated that when individuals can acquire a positive or negative reputation, they are motivated to act in a cooperative fashion. However, few researchers have examined how the opportunity to confer this reputation on a partner may influence an individual's behavior in a mixed-motive situation. In the present study, an experiment using a trust-game paradigm indicated that participants felt that they had more control over their partner's reputation when they could leave feedback regarding the outcome of their interaction with their partner. However, the participants would only donate a substantial portion of their initial endowment (i.e., over 50%) when they could leave feedback for a partner and when they felt that their partner was concerned about his or her own reputation. The author discusses these findings in regard to how they might apply to real-world reputation use and how possible future studies may further expand knowledge in this area.
Article
Researchers in the field of innovation networks have acknowledged the important role knowledge and interaction play for the emergence of innovation. However, not much research has been done to investigate the behavioural dynamics necessary for the success of innovation networks. Our article deals with this issue in a threefold manner: we combine a theoretical analysis with an empirical validation and set up a multi-agent system based on both, simulating the behavioural dynamics of collaborative R&D. With regard to the theoretical foundation, the cognitive foundations of knowledge generation under bounded rationality are conceptualized. This is linked to a discussion of the role trust plays in the course of economic interaction. Trust itself proves to be a relevant mode of economic (inter)action which enables agents to overcome social dilemmas that might arise in the process of collaborative R&D. For empirical validation, a unique data set is used (23 German innovation networks, containing about 600 agents). Results of the analyses highlight the dynamics and interdependence of knowledge generation and trust as well as the sources of trust building in terms of three different components (generalised trust, specific trust, and institutional trust). The multi-agent system comprises the theoretical and empirical findings, e.g. by incorporating heterogeneity with regard to adaptive capacity, reciprocity, and the tolerance of non-reciprocal behaviour. The results give evidence of the (changing) relevance of trust in the course of collaborative R&D. The success of collaborative R&D is determined through a co-evolution of individual and interactive processes of knowledge transformation and trust building.