Fig 1 - uploaded by Jeremy Pitt
Adapted Synthetic Method [2]  

Source publication
Conference Paper
Agents that behave maliciously or incompetently are a potential hazard in open distributed e-commerce applications. However, human societies have evolved signals and mechanisms based on social interaction to defend against such behaviour. In this paper we present a computational socio-cognitive framework which formalises social theories of trust, repu...

Contexts in source publication

Context 1
... By simulating a system composed of such agents and observing the outcome, we aim to tailor the formalisms to achieve the desired performance and hence have an agent design that is applicable both socially and economically for a distributed agent-mediated market. This process is illustrated by the Adapted Synthetic Method shown in Fig. 1 and is explained in more detail in [2]. It is our intent to evaluate the performance of the applied models based on the efficiency, fairness and dynamics of the resulting e-commerce communities. By giving particular attention to the suitability of the economic models employed, we hope to ensure that the results of our future simulation ...
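The simulate-and-tailor cycle described in this context can be sketched as a loop. This is a hypothetical illustration, not the paper's method: `ToyMarket`, the single `trust_weight` parameter, and the acceptance test are all invented stand-ins for the formalism, scenario, and efficiency/fairness/dynamics criteria named above.

```python
# A minimal, hypothetical sketch of the simulate-and-tailor loop behind the
# Adapted Synthetic Method; the toy "market" and its parameter are illustrative.

class ToyMarket:
    """Stand-in scenario: outcome improves as the formalism's one parameter grows."""
    def simulate(self, formalism):
        return {"efficiency": min(1.0, 0.2 * formalism["trust_weight"])}

def adapted_synthetic_method(formalism, scenario, acceptable, tailor, max_rounds=10):
    """Simulate agents built from `formalism` in `scenario`, then adjust the
    formalism until the observed outcome is acceptable (or rounds run out)."""
    outcome = scenario.simulate(formalism)
    for _ in range(max_rounds):
        if acceptable(outcome):                     # desired performance reached
            break
        formalism = tailor(formalism, outcome)      # feed observations back in
        outcome = scenario.simulate(formalism)      # re-run the agent society
    return formalism, outcome

formalism, outcome = adapted_synthetic_method(
    {"trust_weight": 1.0},
    ToyMarket(),
    acceptable=lambda o: o["efficiency"] >= 0.8,
    tailor=lambda f, o: {"trust_weight": f["trust_weight"] + 1.0},
)
```

The loop terminates either when the outcome passes the acceptance criteria or when the tailoring budget is exhausted, mirroring the "simulate, observe, tailor" feedback the text describes.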
Context 2
... this paper we present the work from the first two stages of Fig. 1, namely our computational formalisms of social theories and the specification of an artificial retail market scenario. The paper is organised as follows. In Section 2, we present a brief specification of our market scenario. In Section 3, we describe an economic model for producer/seller agents. In Section 4, we detail the socio- ...
Context 3
... constant ξ is the value of an extra unit of resource when the quantity already consumed is zero, and γ determines the rate at which the value of an extra unit decreases as the amount consumed increases. Fig 4 shows formula (1) for ξ = 500 with diminishing utility at rates of γ = 0.1, 0.2 and 0.3. Given the price per unit of resource, the consumer can maximise its utility by consuming S1 units, where S = S1 solves the marginal utility function (1) set equal to the price per unit. ...
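Formula (1) itself is not reproduced on this page. Assuming a marginal utility of the exponential form MU(S) = ξ·e^(−γS), which matches the description (value ξ at S = 0, decaying at rate γ), the first-order condition MU(S1) = price has the closed form S1 = ln(ξ/price)/γ; a sketch under that assumption:

```python
import math

def marginal_utility(s: float, xi: float = 500.0, gamma: float = 0.1) -> float:
    """Assumed form of (1): value of one extra unit after s units consumed.
    Equals xi at s = 0 and decays at rate gamma as consumption grows."""
    return xi * math.exp(-gamma * s)

def optimal_quantity(price: float, xi: float = 500.0, gamma: float = 0.1) -> float:
    """Solve marginal_utility(S1) == price for S1 (the first-order condition)."""
    return math.log(xi / price) / gamma

# With xi = 500, gamma = 0.1 (the Fig 4 parameters) and a unit price of 100,
# the utility-maximising quantity satisfies MU(S1) == price exactly.
s1 = optimal_quantity(price=100.0)
assert abs(marginal_utility(s1) - 100.0) < 1e-9
```

If the paper's actual formula (1) differs, only `marginal_utility` needs replacing; the optimality condition (consume until marginal utility equals price) is unchanged.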

Similar publications

Article
The article investigates the project management of an aeronautical system upgrade under conditions of uncertainty. An approach based on artificial immune systems and Bayesian networks is suggested.

Citations

... So far, there is a lack of understanding about how affective aspects (emotions) influence the level of trust or reputation an agent has when it is inserted into an automated system for information exchange, such as an evaluation system. Although there are models presenting affective characteristics, most of them (i) employ the BDI architecture [7] or (ii) are metamodels constructed upon fixed numeric models [8], [9]. Trust computation based on emotions thus remains unclear [10], and most proposed techniques do not rely on more sophisticated theories, e.g. ...
Conference Paper
Trust and reputation mechanisms are part of the logical protection of intelligent agents, preventing malicious agents from acting egotistically or with the intention to damage others. Several studies in Psychology, Neurology and Anthropology claim that emotions are part of the human decision-making process. However, there is a lack of understanding about how affective aspects, such as emotions, influence trust or reputation levels of intelligent agents when they are inserted into an information exchange environment, e.g. an evaluation system. In this paper we propose a reputation model that accounts for emotional bonds given by Ekman's basic emotions and inductive machine learning. Our proposal is evaluated by extracting emotions from texts provided by two online human-fed evaluation systems. Empirical results show significant improvements in agent utility with $p < .05$ when compared to non-emotion-wise proposals, thus showing the need for future research in this area.
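The idea of folding emotional bonds into a reputation score can be sketched as follows. This is not the cited paper's model: the valence weights over Ekman's six basic emotions, the blending rule, and the parameter names are all illustrative assumptions.

```python
# Hypothetical sketch of an emotion-weighted reputation update in the spirit
# of the model above; valence weights and the update rule are illustrative.

EKMAN_VALENCE = {          # Ekman's six basic emotions, signed by rough valence
    "joy": +1.0, "surprise": +0.3,
    "anger": -1.0, "disgust": -0.8, "fear": -0.6, "sadness": -0.5,
}

def emotional_bond(emotions: dict) -> float:
    """Collapse per-emotion intensities (0..1) extracted from a review text
    into one signed score in [-1, 1]."""
    total = sum(emotions.values()) or 1.0
    return sum(EKMAN_VALENCE[e] * v for e, v in emotions.items()) / total

def update_reputation(reputation: float, rating: float, emotions: dict,
                      alpha: float = 0.3, beta: float = 0.2) -> float:
    """Blend a numeric rating (0..1) with the emotional bond of its free text,
    then move the running reputation towards the blended signal."""
    signal = rating + beta * emotional_bond(emotions)
    return (1 - alpha) * reputation + alpha * max(0.0, min(1.0, signal))

# A joyful review pushes reputation up more than the bare rating would suggest.
r = update_reputation(0.5, rating=0.9, emotions={"joy": 0.8, "anger": 0.1})
```

The point of the sketch is only the shape of the computation: the emotion channel acts as a bounded correction to the numeric rating before the usual exponential-smoothing update.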
... The first cognitive model was proposed by Castelfranchi and Falcone [2001], and it is based on trust as a result of a mental state composed by some cognitive components, such as objective, ability, competence, and dependence. Another example is the model proposed by Neville and Pitt [2004], which is based on the social theory of trust, reputation, recommendation, and learning through direct experiences integrated in the agent reasoning mechanism. The model by Carter and Ghorbani [2003] takes into account the combination of self-esteem, reputation, and familiarity among the agents, while Zhang et al. [2007] argue that intimacy directly influences trust. ...
... Other models that implement delegation concepts are the ones by Carter and Ghorbani [2003], Neville and Pitt [2004], Burnett et al. [2013], Piunti et al. [2012], and Venanzi et al. [2011]. All of them are not specific on this task and they do not have many details about how this process happens. ...
Article
FABRÍCIO ENEMBRECK, PPGIa: Graduate Program on Informatics – Pontifical Catholic University of Paraná (PUCPR). Finding reliable partners to interact with in open environments is a challenging task for software agents, and trust and reputation mechanisms are used to handle this issue. From this viewpoint, we can observe the growing body of research on this subject, which indicates that these mechanisms can be considered key elements to design multiagent systems (MASs). Based on that, this article presents an extensive but not exhaustive review about the most significant trust and reputation models published over the past two decades, and hundreds of models were analyzed using two perspectives. The first one is a combination of trust dimensions and principles proposed by some relevant authors in the field, and the models are discussed using an MAS perspective. The second one is the discussion of these dimensions taking into account some types of interaction found in MASs, such as coalition, argumentation, negotiation, and recommendation. By these analyses, we aim to find significant relations between trust dimensions and types of interaction so it would be possible to construct MASs using the most relevant dimensions according to the types of interaction, which may help developers in the design of MASs.
... The growing number of research efforts in the areas of mobile e-commerce agents [15] and agent-mediated e-commerce [16,17,18] attests to the potential benefits of integrating mobile agents into more advanced e-commerce applications. Such e-auction systems raise many new security issues that are quite different from those of conventional client/server systems: the auction server which accommodates the agents can be attacked by malicious agents. ...
Article
Electronic auction has introduced new processes in the way auctions are conducted, which require investigation to ensure compliance with Sharia (Islamic principles that are based on Qur’an and Sunnah) rules. The use of mobile software agents in electronic auction marketplaces adds ubiquity for the bidders. In particular it allows agents to respond more quickly to local changes in auction marketplaces and to make bidding strategy decisions faster than remote agents or human participants could. Nevertheless, mobile agents raise issues concerning malicious attacks from a variety of intervening sources that might alter the sensitive information they carry. A multi-agent based e-auction must fit the concept of halal trade and address the problems resulting from fraudulent activities in online auctions. Most auction frauds, such as bid shilling and bid shielding, also violate Sharia transaction law. If an attacker gains information on sensitive data such as a bidder's maximum bid, the information can be manipulated to artificially increase prices through bid shilling. A Sharia compliant e-auction must comply with Islamic business law and offer a secure and trustworthy trading environment. This paper discusses findings of non-compliance with Sharia rules found in case studies conducted on a number of major online auction systems. This paper focuses on addressing the bid shilling problem by proposing a multi-agent security architecture for Sharia compliant e-auction which encompasses security protocols for mobile agent authentication and confidentiality of sensitive information. Mobile agents are used to carry bidders’ and sellers’ data. Cryptographic protocols such as encryption/decryption, digital signatures and hash functions are used and applied to identified high- and low-risk data.
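The integrity half of such a protocol, detecting whether a mobile agent's sensitive bid data was altered en route, can be sketched with standard-library primitives. This is not the paper's architecture: a real deployment would use asymmetric signatures and encryption as the abstract describes; here an HMAC over a canonical serialisation stands in for the signature, and all names are illustrative.

```python
# Minimal sketch of sealing a bidder's maximum bid against tampering
# (e.g. for bid shilling); HMAC stands in for a full digital signature.
import hmac, hashlib, json

def seal_bid(bidder_id: str, max_bid: int, key: bytes) -> dict:
    """Serialise the sensitive bid data canonically and attach an integrity tag."""
    payload = json.dumps({"bidder": bidder_id, "max_bid": max_bid}, sort_keys=True)
    tag = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_bid(sealed: dict, key: bytes) -> bool:
    """Reject any bid whose payload was altered in transit."""
    expected = hmac.new(key, sealed["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])

key = b"shared-secret-between-bidder-and-auction-house"  # illustrative key
sealed = seal_bid("bidder-42", 1500, key)
# An attacker inflating the maximum bid invalidates the tag.
tampered = {"payload": sealed["payload"].replace("1500", "9999"),
            "tag": sealed["tag"]}
```

Confidentiality (keeping the maximum bid secret from the auction host itself) would additionally require encrypting the payload, which this sketch deliberately omits.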
... Indeed, one of the effects of this trust-based behaviour is to "discourage malicious behaviours and to isolate incompetent agents" [10]: if an agent needs to rely on the society to achieve its goals, and this is almost always the case in the society models we consider here, it needs to be well-regarded by other agents, and thus to have a good reputation (so that it will be well-recommended and often relied upon, and its requests will be accepted more easily). This need for consideration is often a goal in itself. ...
... Here is a synthetic schema of the trust and reputation mechanism in this framework (quoted from [10]). This trust management framework has been used with producer and consumer agents, in order to simulate a simple economic market. In such a scenario, the trust and reputation management layer brings useful information to both kinds of actors about the market, telling them who the efficient agents are, and who the incompetent or malicious ones are. ...
... One interesting experiment would be the integration of an implementation of the forgiveness model described in [19] into a trust and reputation management framework like the one presented in [10] and [11]. It is a multi-agent platform based on an economic society (trading agents), which is intuitively a domain where feelings and emotions play no significant part, either for the human counterpart or in the specialised software systems. ...
Article
Transposition of human emotions and feelings into the context of software programs (multi-agent systems, for instance) has been used in order to improve performance, to help human-computer interactive programs adapt themselves to the user, or to fight disruptive user behaviour caused by a lack of conventional social cues. This ISO will examine the use of affective computing (in particular the representation of trust, reputation and forgiveness) in conjunction with software agents to assess the potential of forgiveness for ensuring social order in online communities.
Conference Paper
With the popularization of the Web, digital inclusion has motivated the research and development of ever more sophisticated search engines, concentrating on the concepts of the Semantic Web, which incorporates semantic elements into the formatting of content and enables more intelligent search. Such intelligence is characterized by the quality or precision of the knowledge retrieved or returned in reply to a given request, involving a structure for how the knowledge is represented, accessed, retrieved, manipulated and enriched. The intention of this process is to enable executive professionals to interact naturally with computers using natural language, where the computers understand the submitted questions and answer using the same terms as the question in an understandable way, guiding them through complex decisions and allowing them to occupy themselves with nobler tasks: the business of their company.
... A socio-cognitive multi-agent system is a system in which the social relationships between agents are conditioned by cognitive, economic, or emotive factors. For example, in [23] we defined a framework which accounted for socio-cognitive and socio-economic factors to inform a trust decision in e-commerce. This showed how 'first encounter' decisions were best dealt with by a complex calculation based on risk exposure and recommendations ('risk' trust); but how 'nth encounter' decisions were short-cut based on personal experiences (so-called 'reliance' trust). ...
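The first-encounter versus nth-encounter short-cut described in this context can be sketched as a decision procedure. This is an illustrative reconstruction, not the formulas of [23]: the threshold, the averaging, and the way risk exposure discounts recommendations are all assumptions.

```python
# Hypothetical sketch of 'risk' trust vs 'reliance' trust; the thresholds
# and the risk-discounting rule are illustrative, not the cited framework's.

def trust_decision(own_experiences: list, recommendations: list,
                   risk_exposure: float, threshold: float = 0.5) -> bool:
    """Decide whether to trust a partner. Outcomes and recommendations are
    scores in 0..1; risk_exposure in 0..1 discounts third-party advice."""
    if own_experiences:
        # 'Reliance' trust: nth encounter, short-cut on personal experience.
        reliance = sum(own_experiences) / len(own_experiences)
        return reliance >= threshold
    # 'Risk' trust: first encounter, weigh recommendations against exposure.
    if not recommendations:
        return False  # no evidence at all: decline
    recommended = sum(recommendations) / len(recommendations)
    return recommended * (1.0 - risk_exposure) >= threshold
```

The structure captures the distinction made above: direct experience, once available, bypasses the more expensive risk-and-recommendation calculation entirely.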
Article
Ad hoc networks can be formed from arbitrary collections of individual people (forming online computer-mediated communities), mobile routers (forming data communication networks) or electronic business processes (forming virtual enterprises). One way to deal with common features of dynamism in the network topology and membership, conflicts, sub-ideal operation, security, and the general need for continuous operation in the absence of a centralised facility, is to treat the ad hoc network as a norm-governed multi-agent system and use participatory adaptation as the mechanism for achieving autonomic capability (i.e. a global system response derived from the collective local behaviours and interactions of the individuals comprising the system). Therefore, complementing the formal representation of organisational behaviour defined in terms of roles, rules, norms, etc., this autonomic capability is at least partially derived from an underlying social network which plays a significant role in determining how, for example, conflicts are resolved and how the organisation itself is run. This position statement presents initial developments in what we call micro-social systems, which arise from interleaving a logical model of norm-governed systems with a mathematical model of social networks, and its application to issues of resource allocation, security, conflict resolution and self-adaptation in ad hoc networks.
... In the following we summarise our retail marketplace scenario, and give a brief overview of the main participants, i.e. the producer and consumer agents. Due to space limitations all technical detail has been omitted; a full account of the framework can be found in [9]. ...
... As an extension of the CSCEF specified in [9], confidence in the trust belief (14) forms an important component of our decision making. In order for the agent to make a trust decision it requires a certain amount of confidence in its trust belief. ...
... As well as automation of certain consumer/producer behaviours such as price setting and the quantity decision. How the framework achieves this is more clearly defined in [9]. ...
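The confidence requirement mentioned in these contexts can be sketched as a gate on the trust decision. This is not the paper's equation (14): the evidence-based confidence measure and both thresholds are illustrative assumptions.

```python
# Hypothetical sketch of gating a trust decision on confidence in the trust
# belief; the confidence measure (growing with direct experience) is
# illustrative, not the cited framework's equation (14).

def confidence(n_experiences: int, saturation: int = 10) -> float:
    """Confidence in the trust belief, rising towards 1 as evidence accumulates."""
    return min(1.0, n_experiences / saturation)

def act_on_trust(trust_belief: float, n_experiences: int,
                 trust_threshold: float = 0.6, conf_threshold: float = 0.5):
    """Commit to trust (True) or distrust (False) only when confident enough;
    otherwise defer the decision (None) and gather more evidence."""
    if confidence(n_experiences) < conf_threshold:
        return None  # not enough evidence to commit either way
    return trust_belief >= trust_threshold
```

The three-valued outcome reflects the text: an agent with a favourable but poorly-evidenced trust belief should neither trust nor blacklist, but keep interacting cautiously until its confidence crosses the threshold.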
Article
We present an experimental evaluation of a (computational) socio-cognitive and economic framework (CSCEF) for agent decision making in distributed e-commerce markets. The framework formalises social theories of trust, reputation, recommendation and learning from direct experience, and integrates these socio-cognitive formalisms with the agent's economic reasoning. Together the socio-cognitive and economic elements are designed to enable agents to cope with malicious or incompetent behaviour in an e-commerce environment. To evaluate the performance of our framework we have developed a multi-agent simulation environment. The results of our simulations show that the integration of social behaviour into the trading agent architecture can not only act as an effective mechanism for the regulation of distributed agent-mediated marketplaces, but also improve the agent's ability to maximise its owner's utility.
Chapter
Open systems typically occur in a wide range of applications, from virtual organisations and vehicular networks to cloud/grid computing and reconfigurable manufacturing. All these applications encounter a similar problem: how does a system component reliably complete its own tasks, when successful task completion depends on interaction and interoperation with other, potentially unreliable and conflicting, components. One solution to this problem is trust: depending on a second party requires a willingness to expose oneself to risk, and to the extent that this ‘willingness’ can be quantified or qualified, it can be used to inform a binary trust decision. Therefore, a formal model of the social relationship underpinning such trust decisions is essential for conditioning bipartite interactions between components in an open system. However, there are a number of issues that follow from this – for example: what is to be done when the outcome of the trust decision is contrary to expectation? Are there positive externalities that can be derived from a successful trust decision? And how can we ensure that outcomes of collective decision-making in such circumstances are, in some sense, ‘correct’ and/or ‘fair’? Our answers to these questions have been found in the formalisation of other social relations, respectively forgiveness, social capital and justice. This chapter presents a survey of the development of formal models of social relations, from trust to justice via forgiveness and social capital, all of which address the issue of reliable interoperation in open systems.